Search Results (839)

Search Parameters:
Journal = Information
Section = Information Applications

30 pages, 522 KiB  
Article
Enhancing Typhlo Music Therapy with Personalized Action Rules: A Data-Driven Approach
by Aileen Benedict, Zbigniew W. Ras, Pawel Cylulko and Joanna Gladyszewska-Cylulko
Information 2025, 16(8), 666; https://doi.org/10.3390/info16080666 - 4 Aug 2025
Abstract
In the context of typhlo music therapy, personalized interventions can significantly enhance the therapeutic experience for visually impaired children. Leveraging a data-driven approach, we incorporate action-rule discovery to provide insights into the factors of music that may benefit individual children. The system utilizes a comprehensive dataset developed in collaboration with an experienced music therapist, special educator, and clinical psychologist, encompassing meta-decision attributes, decision attributes, and musical features such as tempo, rhythm, and pitch. By extracting and analyzing these features, our methodology identifies key factors that influence therapeutic outcomes. Some themes discovered through action-rule discovery include the effect of harmonic richness and loudness on expression and communication. The main findings demonstrate the system’s ability to offer personalized, impactful, and actionable insights, leading to improved therapeutic experiences for children undergoing typhlo music therapy. Our conclusions highlight the system’s potential to transform music therapy by providing therapists with precise and effective tools to support their patients’ developmental progress. This work shows the significance of integrating advanced data analysis techniques in therapeutic settings, paving the way for future enhancements in personalized music therapy interventions. Full article
(This article belongs to the Section Information Applications)
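
As a rough illustration of the action-rule idea described in this abstract, the Python sketch below compares therapy records that share the same stable context but differ in outcome, and lists the flexible musical-attribute changes that accompany the improvement. The attribute names (tempo, harmonic_richness, expression) and the toy data are hypothetical, not the authors' dataset or their mining algorithm.

```python
# A minimal, illustrative sketch of action-rule style reasoning: for records that
# share the same stable attributes but differ in the decision outcome, list the
# flexible (musical) attribute changes that accompany the improvement.
# Attribute names and data are hypothetical, not the authors' dataset.
import pandas as pd
from itertools import combinations

sessions = pd.DataFrame({
    "child_id":          ["c1", "c1", "c2", "c2"],            # stable attribute
    "tempo":             ["slow", "fast", "slow", "fast"],     # flexible
    "harmonic_richness": ["low", "high", "low", "high"],       # flexible
    "expression":        ["poor", "good", "poor", "good"],     # decision
})

STABLE, FLEXIBLE, DECISION = ["child_id"], ["tempo", "harmonic_richness"], "expression"

def candidate_action_rules(df, undesired="poor", desired="good"):
    rules = []
    for (_, a), (_, b) in combinations(df.iterrows(), 2):
        if any(a[s] != b[s] for s in STABLE):
            continue  # only compare records for the same (stable) context
        lo, hi = (a, b) if (a[DECISION], b[DECISION]) == (undesired, desired) else \
                 ((b, a) if (b[DECISION], a[DECISION]) == (undesired, desired) else (None, None))
        if lo is None:
            continue
        changes = {f: (lo[f], hi[f]) for f in FLEXIBLE if lo[f] != hi[f]}
        if changes:
            rules.append({"context": {s: lo[s] for s in STABLE}, "change": changes})
    return rules

for rule in candidate_action_rules(sessions):
    print(rule)
```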

23 pages, 1302 KiB  
Article
Deep Learning-Enhanced Ocean Acoustic Tomography: A Latent Feature Fusion Framework for Hydrographic Inversion with Source Characteristic Embedding
by Jiawen Zhou, Zikang Chen, Yongxin Zhu and Xiaoying Zheng
Information 2025, 16(8), 665; https://doi.org/10.3390/info16080665 - 4 Aug 2025
Abstract
Ocean Acoustic Tomography (OAT) is an important marine remote sensing technique used for inverting large-scale ocean environmental parameters, but traditional methods face challenges in computational complexity and environmental interference. This paper proposes a causal-analysis-driven AI-for-Science method for high-precision and rapid inversion of oceanic hydrological parameters in complex underwater environments. Based on the open-source VTUAD (Vessel Type Underwater Acoustic Data) dataset, the method first utilizes a fine-tuned Paraformer (a fast and accurate parallel transformer) model for precise classification of sound source targets. Then, using structural causal models (SCM) and potential outcome frameworks, causal embedding vectors with physical significance are constructed. Finally, a cross-modal Transformer network is employed to fuse acoustic features, sound source priors, and environmental variables, enabling inversion of temperature and salinity in the Strait of Georgia, Canada. Experimental results show that the method achieves accuracies of 97.77% and 95.52% for temperature and salinity inversion tasks, respectively, significantly outperforming traditional methods. Additionally, with GPU acceleration, inference speed improves more than sixfold, enabling real-time OAT on edge computing platforms deployed as smart hardware and validating the method’s practicality. By incorporating causal inference and cross-modal data fusion, this study not only enhances inversion accuracy and model interpretability but also provides new insights for real-time applications of OAT. Full article
(This article belongs to the Special Issue Advances in Intelligent Hardware, Systems and Applications)
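
To make the fusion step concrete, here is a minimal PyTorch sketch of a cross-modal Transformer that mixes projected acoustic features, a sound-source class embedding, and environmental variables, then regresses temperature and salinity. All dimensions, layer counts, and the one-token-per-modality design are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch of the cross-modal fusion idea: project acoustic features, a
# sound-source class embedding, and environmental variables into a shared space,
# fuse them with a Transformer encoder, and regress temperature and salinity.
import torch
import torch.nn as nn

class CrossModalInverter(nn.Module):
    def __init__(self, acoustic_dim=128, env_dim=8, n_source_classes=5, d_model=64):
        super().__init__()
        self.acoustic_proj = nn.Linear(acoustic_dim, d_model)
        self.env_proj = nn.Linear(env_dim, d_model)
        self.source_emb = nn.Embedding(n_source_classes, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)  # outputs: temperature, salinity

    def forward(self, acoustic, env, source_class):
        # Each modality becomes one token; the encoder mixes information across them.
        tokens = torch.stack([
            self.acoustic_proj(acoustic),
            self.env_proj(env),
            self.source_emb(source_class),
        ], dim=1)                      # (batch, 3, d_model)
        fused = self.fusion(tokens).mean(dim=1)
        return self.head(fused)        # (batch, 2)

model = CrossModalInverter()
pred = model(torch.randn(4, 128), torch.randn(4, 8), torch.randint(0, 5, (4,)))
print(pred.shape)  # torch.Size([4, 2])
```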

31 pages, 3315 KiB  
Article
Searching for the Best Artificial Neural Network Architecture to Estimate Column and Beam Element Dimensions
by Ayla Ocak, Gebrail Bekdaş, Sinan Melih Nigdeli, Umit Işıkdağ and Zong Woo Geem
Information 2025, 16(8), 660; https://doi.org/10.3390/info16080660 - 1 Aug 2025
Abstract
The cross-sectional dimensions of structural elements must be selected carefully, as they are directly related to the stiffness of the structure. Various optimization processes are applied to determine the optimum cross-sectional dimensions of beams or columns in structures. By repeating the optimization for multiple load scenarios, it is possible to create a dataset of optimum design section properties. However, this means repeating the same processes for every new case. Artificial intelligence offers a shortcut: a model can be trained on previously generated optimum cross-sectional dimensions and then infer dimensions for new cases. By processing the data, an artificial neural network can produce models that predict the cross-section for a new structural element. In this study, an optimization process is applied to a simple tubular column and an I-section beam, and the results are compiled into a dataset that presents the optimum section dimensions as classes. The harmony search (HS) algorithm, a metaheuristic method, was used in the optimization. An artificial neural network (ANN) was created to predict the cross-sectional dimensions of the sample structural elements. The neural architecture search (NAS) method, which employs metaheuristic algorithms to search for the best neural network architecture, was applied; the best values of parameters such as the activation function, number of layers, and number of neurons are searched for with a tool called HyperNetExplorer. Model metrics were calculated to evaluate the prediction success of the developed model, and an effective neural network architecture for column and beam elements was obtained. Full article
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
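
The sketch below illustrates the kind of metaheuristic architecture search described in this abstract, using a simplified harmony-search loop (memory consideration only, no pitch adjustment) over an MLP's activation, layer count, and neuron count on synthetic data. The search space, objective, and parameter values are assumptions; HyperNetExplorer itself is not used here.

```python
# A simplified harmony-search-style loop over neural-network hyperparameters
# (activation, layer count, neurons per layer). Illustrative only.
import random
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=0)

SPACE = {
    "activation": ["relu", "tanh", "logistic"],
    "n_layers": [1, 2, 3],
    "n_neurons": [16, 32, 64, 128],
}
HMS, HMCR, ITERATIONS = 5, 0.8, 20  # harmony memory size, memory-consideration rate

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(h):
    net = MLPRegressor(hidden_layer_sizes=(h["n_neurons"],) * h["n_layers"],
                       activation=h["activation"], max_iter=500, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="r2").mean()

random.seed(0)
memory = [(h, fitness(h)) for h in (sample() for _ in range(HMS))]
for _ in range(ITERATIONS):
    # New harmony: draw each component from memory with probability HMCR,
    # otherwise sample it fresh from the search space.
    new = {k: (random.choice([m[0][k] for m in memory]) if random.random() < HMCR
               else random.choice(SPACE[k]))
           for k in SPACE}
    score = fitness(new)
    worst = min(range(HMS), key=lambda i: memory[i][1])
    if score > memory[worst][1]:
        memory[worst] = (new, score)

best = max(memory, key=lambda m: m[1])
print("best architecture:", best[0], "cv R2:", round(best[1], 3))
```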

19 pages, 2871 KiB  
Article
Strategic Information Patterns in Advertising: A Computational Analysis of Industry-Specific Message Strategies Using the FCB Grid Framework
by Seung Chul Yoo
Information 2025, 16(8), 642; https://doi.org/10.3390/info16080642 - 28 Jul 2025
Abstract
This study presents a computational analysis of industry-specific advertising message strategies through the theoretical lens of the FCB (Foote, Cone & Belding) grid framework. Leveraging the AiSAC (AI Analysis System for Ad Creation) system developed by the Korea Broadcast Advertising Corporation (KOBACO), we analyzed 27,000 Korean advertisements across five major industries using advanced machine learning techniques. Through Latent Dirichlet Allocation topic modeling with a coherence score of 0.78, we identified five distinct message strategies: emotional appeal, product features, visual techniques, setting and objects, and entertainment and promotion. Our computational analysis revealed that each industry exhibits a unique “message strategy fingerprint” that significantly discriminates between categories, with discriminant analysis achieving 62.7% classification accuracy. Time-series analysis using recurrent neural networks demonstrated a significant evolution in strategy preferences, with emotional appeal increasing by 44.3% over the study period (2015–2024). By mapping these empirical findings onto the FCB grid, the present study validated that industry positioning within the grid’s quadrants aligns with theoretical expectations: high-involvement/think (IT and Telecom), high-involvement/feel (Public Institutions), low-involvement/think (Food and Household Goods), and low-involvement/feel (Services). This study contributes to media science by demonstrating how computational methods can empirically validate the established theoretical frameworks in advertising, providing a data-driven approach to understanding message strategy patterns across industries. Full article
(This article belongs to the Special Issue AI Tools for Business and Economics)
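
A minimal sketch of the topic-modeling step: fitting an LDA model with gensim and computing a c_v coherence score. The toy corpus, the five-topic setting, and the preprocessing are illustrative; the study's 27,000-advertisement dataset is not reproduced.

```python
# Fit a small LDA model over toy ad descriptions and report c_v coherence.
from gensim import corpora, models
from gensim.models import CoherenceModel

docs = [
    "warm family moment emotional music smile".split(),
    "new phone fast processor camera feature price".split(),
    "slow motion drone shot city night visual".split(),
    "kitchen table breakfast family product demo".split(),
    "discount event coupon fun celebrity entertainment".split(),
]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, id2word=dictionary, num_topics=5, random_state=0, passes=10)

coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print(f"c_v coherence: {coherence:.2f}")
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```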

23 pages, 2380 KiB  
Article
DEEPEIA: Conceptualizing a Generative Deep Learning Foreign Market Recommender for SMEs
by Nuno Calheiros-Lobo, Manuel Au-Yong-Oliveira and José Vasconcelos Ferreira
Information 2025, 16(8), 636; https://doi.org/10.3390/info16080636 - 25 Jul 2025
Abstract
This study introduces the concept of DEEPEIA, a novel deep learning (DL) platform designed to recommend the optimal export market, and its ideal foreign champion, for any product or service offered by a small and medium-sized enterprise (SME). Drawing on expertise in SME internationalization and leveraging recent advances in generative artificial intelligence (AI), this research addresses key challenges faced by SMEs in global expansion. A systematic review of existing platforms was conducted to identify current gaps and inform the conceptualization of an advanced generative DL recommender system. The Discussion section proposes the conceptual framework for such a decision optimizer within the context of contemporary technological advancements and actionable insights. The conclusion outlines future research directions, practical implementation strategies, and expected obstacles. By mapping the current landscape and presenting an original forecasting tool, this work advances the field of AI-enabled SME internationalization while still acknowledging that more empirical validation remains a necessary next step. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

17 pages, 977 KiB  
Article
Evaluation of Learning-Based Models for Crop Recommendation in Smart Agriculture
by Muhammad Abu Bakr, Ahmad Jaffar Khan, Sultan Daud Khan, Mohammad Haseeb Zafar, Mohib Ullah and Habib Ullah
Information 2025, 16(8), 632; https://doi.org/10.3390/info16080632 - 24 Jul 2025
Abstract
The use of intelligent crop recommendation systems has become crucial in the era of smart agriculture to increase yield and enhance resource utilization. In this study, we compared different machine learning (ML) and deep learning (DL) models utilizing structured tabular data for crop recommendation. During our experimentation, both ML and DL models achieved decent performance; however, their architectures are not suited for building conversational systems. To overcome this limitation, we converted the structured tabular data to descriptive textual data and used it to fine-tune Large Language Models (LLMs), including BERT and GPT-2. In comprehensive experiments, we demonstrated that GPT-2 achieved an accuracy of 99.55%, higher than the best-performing ML and DL models, while maintaining a precision of 99.58% and a recall of 99.55%. GPT-2 thus not only maintains competitive accuracy but also offers natural language interaction, making it a viable option for real-time agricultural decision support systems. Full article
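
The tabular-to-text conversion and GPT-2 setup could look roughly like the sketch below, which renders one soil/weather record as a sentence and loads GPT2ForSequenceClassification for a small set of crop labels. The feature names, sentence template, and label subset are assumptions, and the actual fine-tuning loop is omitted.

```python
# Convert a tabular record to descriptive text and prepare GPT-2 for
# classification-style fine-tuning. Illustrative feature names and labels.
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

def row_to_text(row):
    return (f"The soil has N={row['N']}, P={row['P']} and K={row['K']}, "
            f"temperature {row['temperature']} C, humidity {row['humidity']}%, "
            f"pH {row['ph']} and rainfall {row['rainfall']} mm.")

sample = {"N": 90, "P": 42, "K": 43, "temperature": 20.9,
          "humidity": 82.0, "ph": 6.5, "rainfall": 202.9}
labels = ["rice", "maize", "chickpea"]           # illustrative subset of crops

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token        # GPT-2 has no pad token by default
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=len(labels))
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(row_to_text(sample), return_tensors="pt", padding=True)
outputs = model(**inputs)                        # fine-tuning would optimize this head
print(outputs.logits.shape)                      # (1, num_labels)
```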

23 pages, 650 KiB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
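
To suggest what the bidirectional exchange might look like in practice, the sketch below assembles a hypothetical JSON instance of an ietf-network-style node annotated with a role and iptables details and embeds it in an LLM prompt. The element names under the exercise: prefix are guesses for illustration, not the published YANG profile.

```python
# A rough sketch of the kind of machine-readable exchange such a profile enables:
# a node annotated with a role and Linux/iptables details, serialized as JSON so
# an LLM can parse or regenerate it. Element names below are illustrative guesses.
import json

topology = {
    "ietf-network:networks": {
        "network": [{
            "network-id": "lab-exercise-1",
            "node": [{
                "node-id": "fw1",
                "exercise:role": "linux-firewall",          # hypothetical augmentation
                "exercise:interfaces": [
                    {"name": "eth0", "ipv4": "192.0.2.1/24"},
                    {"name": "eth1", "ipv4": "198.51.100.1/24"},
                ],
                "exercise:iptables": [
                    {"chain": "FORWARD", "protocol": "tcp",
                     "destination-port": 22, "action": "DROP"},
                ],
            }],
        }]
    }
}

prompt = ("Given this network state, explain whether SSH from the client subnet "
          "to the server subnet is blocked:\n" + json.dumps(topology, indent=2))
print(prompt[:200])
```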

15 pages, 2123 KiB  
Article
Multi-Class Visual Cyberbullying Detection Using Deep Neural Networks and the CVID Dataset
by Muhammad Asad Arshed, Zunera Samreen, Arslan Ahmad, Laiba Amjad, Hasnain Muavia, Christine Dewi and Muhammad Kabir
Information 2025, 16(8), 630; https://doi.org/10.3390/info16080630 - 24 Jul 2025
Abstract
In an era where online interactions increasingly shape social dynamics, the pervasive issue of cyberbullying poses a significant threat to the well-being of individuals, particularly among vulnerable groups. Despite extensive research on text-based cyberbullying detection, the rise of visual content on social media platforms necessitates new approaches to address cyberbullying in images, a domain that has been largely overlooked. In this paper, we present a novel dataset specifically designed for the detection of visual cyberbullying, encompassing four distinct classes: abuse, curse, discourage, and threat. The initially prepared dataset, the cyberbullying visual indicators dataset (CVID), comprised 664 samples for training and validation, expanded through data augmentation techniques to ensure balanced and accurate results across all classes. We analyzed this dataset using several advanced deep learning models, including VGG16, VGG19, MobileNetV2, and Vision Transformer. The proposed model, based on DenseNet201, achieved the highest test accuracy of 99%, demonstrating its efficacy in identifying the visual cues associated with cyberbullying. To assess the proposed model’s generalizability, 5-fold stratified cross-validation was also performed, and the model achieved an average test accuracy of 99%. This work introduces a dataset and highlights the potential of leveraging deep learning models to address the multifaceted challenges of detecting cyberbullying in visual content. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
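
A minimal transfer-learning sketch in the spirit of the proposed model: a DenseNet201 backbone with a small head for the four classes. The input size, head layers, and training settings are assumptions rather than the authors' exact configuration.

```python
# DenseNet201 backbone with a small classification head for the four classes
# (abuse, curse, discourage, threat). Illustrative settings only.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

NUM_CLASSES = 4
base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # start by training only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
```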

38 pages, 371 KiB  
Article
How ChatGPT’s Semantic Parrotting (Compared to Gemini’s) Impacts Text Summarization with Literary Text
by Rodolfo Delmonte, Giulia Marchesini and Nicolò Busetto
Information 2025, 16(8), 623; https://doi.org/10.3390/info16080623 - 22 Jul 2025
Abstract
In this paper we explore ChatGPT’s ability to produce a summary, a précis, and/or an essay on the basis of excerpts from a novel, The Solid Mandala, by Nobel Prize-winning Australian writer Patrick White. We use a number of prompts to test functions related to narrative analysis from the point of view of the “sujet”, the “fable”, and the style. We illustrate extensively a number of recurrent semantic mistakes that can badly harm the understanding of the contents of the novel, compiling a list of 12 different types of semantic mistakes, or parrotting, that we found GPT made and that can be regarded as typical of stochastic generation. We then tested Gemini for the same 12 mistakes and found a marked improvement in all critical key issues. The conclusion for ChatGPT is mostly negative. We formulate an underlying hypothesis for its poorer performance: the influence of vocabulary size, which in Gemini is seven times larger than in GPT. Full article
29 pages, 758 KiB  
Article
Value Co-Creation for E-Government Services in Small Island Developing Nations: A Case Study
by Wilford Gibson Lol, Krassie Petrova and Sarita Pais
Information 2025, 16(7), 613; https://doi.org/10.3390/info16070613 - 17 Jul 2025
Abstract
The adoption of e-government services in Small Island Developing Nations (SIDNs) aims to enhance public service efficiency, inclusiveness, and quality. However, e-government service development in SIDNs faces some significant constraints, including limited resources, geographical isolation, low digital literacy levels, and inadequate technological infrastructure. This study investigates value co-creation approaches in e-government service, aiming to identify specific value co-creation processes and methods to support sustainable e-government initiatives in SIDN settings. The study applies a qualitative approach; based on the thematic analysis of interviews with government stakeholders, it identifies contextual factors and conditions that influence e-government value co-creation processes in SIDNs and strategies for sustainable e-government service value co-creation. This study contributes a value co-creation framework that applies participatory design, agile development, collaborative governance, socio-technical thinking, and technology adaptation as methods for the design and implementation of flexible and inclusive e-government services that are responsive to local needs, resilient to challenges, and sustainable over time. The framework can be used by policymakers and practitioners to facilitate sustainable digital transformation in SIDNs through collaborative governance, active participation, and civic engagement with innovative technologies. Full article
(This article belongs to the Section Information Applications)

22 pages, 1199 KiB  
Article
Less Is More: Analyzing Text Abstraction Levels for Gender and Age Recognition Across Question-Answering Communities
by Alejandro Figueroa
Information 2025, 16(7), 602; https://doi.org/10.3390/info16070602 - 13 Jul 2025
Abstract
In social networks like community Question-Answering (cQA) services, members interact with each other by asking and answering each other’s questions. In this way, they find counsel and solutions to very specific real-life situations. Thus, it is safe to say that community fellows log into this kind of social network with the goal of satisfying information needs that cannot be readily resolved via traditional web searches. In order to expedite this process, these platforms also allow registered, and many times unregistered, internauts to browse their archives. As a means of encouraging fruitful interactions, these websites need to be efficient when displaying contextualized/personalized material and when connecting unresolved questions to people willing to help. Here, demographic factors (i.e., gender) together with frontier deep neural networks have proved to be instrumental in adequately overcoming these challenges. In fact, current approaches have demonstrated that it is perfectly plausible to achieve high gender classification rates by inspecting profile images or textual interactions. This work advances this body of knowledge by leveraging lexicalized dependency paths to control the level of abstraction across texts. Our qualitative results suggest that cost-efficient approaches exploit distilled frontier deep architectures (i.e., DistilRoBERTa) and coarse-grained semantic information embodied in the first three levels of the respective dependency tree. Our outcomes also indicate that relative/prepositional clauses conveying geographical locations, relationships, and finance yield a marginal contribution when they show up deep in dependency trees. Full article
(This article belongs to the Section Information Applications)
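
The abstraction idea can be illustrated with spaCy: keep lexical material only within the first few levels of the dependency tree and back off to dependency labels deeper down. The depth threshold and the back-off scheme here are assumptions, not the authors' exact lexicalized-dependency-path features.

```python
# Keep tokens near the root of the dependency tree; replace deeper tokens with
# their dependency labels to coarsen the representation.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def depth(token):
    d = 0
    while token.head is not token:      # the root is its own head in spaCy
        token = token.head
        d += 1
    return d

def abstracted(text, max_depth=3):
    doc = nlp(text)
    return " ".join(tok.text if depth(tok) < max_depth else tok.dep_.upper()
                    for tok in doc)

print(abstracted("My brother who lives in the small town near the border asked about loans."))
```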

16 pages, 1730 KiB  
Article
Retail Demand Forecasting: A Comparative Analysis of Deep Neural Networks and the Proposal of LSTMixer, a Linear Model Extension
by Georgios Theodoridis and Athanasios Tsadiras
Information 2025, 16(7), 596; https://doi.org/10.3390/info16070596 - 11 Jul 2025
Abstract
Accurate retail demand forecasting is integral to the operational efficiency of any retail business. As demand is described over time, the prediction of demand is a time-series forecasting problem which may be addressed in a univariate manner, via statistical methods and simplistic machine learning approaches, or in a multivariate fashion using generic deep learning forecasters that are well-established in other fields. This study analyzes, optimizes, trains and tests such forecasters, namely the Temporal Fusion Transformer and the Temporal Convolutional Network, alongside the recently proposed Time-Series Mixer, to accurately forecast retail demand given a dataset of historical sales in 45 stores with their accompanied features. Moreover, the present work proposes a novel extension of the Time-Series Mixer architecture, the LSTMixer, which utilizes an additional Long Short-Term Memory block to achieve better forecasts. The results indicate that the proposed LSTMixer model is the better predictor, whilst all the other aforementioned models outperform the common statistical and machine learning methods. An ablation test is also performed to ensure that the extension within the LSTMixer design is responsible for the improved results. The findings promote the use of deep learning models for retail demand forecasting problems and establish LSTMixer as a viable and efficient option. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
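
One plausible reading of the LSTMixer extension is sketched below: a TSMixer-style block (time mixing plus feature mixing) followed by an LSTM block and a linear forecasting head. Layer sizes and the placement of the LSTM are assumptions, not the published architecture.

```python
# A simplified LSTMixer-style model: one mixer block, then an LSTM, then a head.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, seq_len, n_features, hidden=64):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU(), nn.Linear(hidden, seq_len))
        self.feat_mlp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_features))
        self.norm1, self.norm2 = nn.LayerNorm(n_features), nn.LayerNorm(n_features)

    def forward(self, x):                               # x: (batch, seq_len, n_features)
        x = x + self.time_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.feat_mlp(self.norm2(x))

class LSTMixerSketch(nn.Module):
    def __init__(self, seq_len=28, n_features=5, horizon=7):
        super().__init__()
        self.mixer = MixerBlock(seq_len, n_features)
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.head = nn.Linear(32, horizon)

    def forward(self, x):
        h, _ = self.lstm(self.mixer(x))
        return self.head(h[:, -1])                      # forecast of the next `horizon` steps

model = LSTMixerSketch()
print(model(torch.randn(8, 28, 5)).shape)               # torch.Size([8, 7])
```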

34 pages, 947 KiB  
Review
Multimodal Artificial Intelligence in Medical Diagnostics
by Bassem Jandoubi and Moulay A. Akhloufi
Information 2025, 16(7), 591; https://doi.org/10.3390/info16070591 - 9 Jul 2025
Abstract
The integration of artificial intelligence into healthcare has advanced rapidly in recent years, with multimodal approaches emerging as promising tools for improving diagnostic accuracy and clinical decision making. These approaches combine heterogeneous data sources such as medical images, electronic health records, physiological signals, and clinical notes to better capture the complexity of disease processes. Despite this progress, only a limited number of studies offer a unified view of multimodal AI applications in medicine. In this review, we provide a comprehensive and up-to-date analysis of machine learning and deep learning-based multimodal architectures, fusion strategies, and their performance across a range of diagnostic tasks. We begin by summarizing publicly available datasets and examining the preprocessing pipelines required for harmonizing heterogeneous medical data. We then categorize key fusion strategies used to integrate information from multiple modalities and overview representative model architectures, from hybrid designs and transformer-based vision-language models to optimization-driven and EHR-centric frameworks. Finally, we highlight the challenges present in existing works. Our analysis shows that multimodal approaches tend to outperform unimodal systems in diagnostic performance, robustness, and generalization. This review provides a unified view of the field and opens up future research directions aimed at building clinically usable, interpretable, and scalable multimodal diagnostic systems. Full article
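
Two of the fusion strategies the review categorizes can be shown in a few lines: early fusion (one model over concatenated modality features) versus late fusion (averaging per-modality predictions). The features and labels below are synthetic toy data.

```python
# Early vs. late fusion on synthetic "imaging" and "EHR" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
image_feats = rng.normal(size=(n, 32))       # e.g., embeddings from an imaging model
ehr_feats = rng.normal(size=(n, 16))         # e.g., structured EHR variables
y = (image_feats[:, 0] + ehr_feats[:, 0] > 0).astype(int)

# Early fusion: a single model over the concatenated representation.
early = LogisticRegression(max_iter=1000).fit(np.hstack([image_feats, ehr_feats]), y)

# Late fusion: independent models per modality, decisions combined afterwards.
img_model = LogisticRegression(max_iter=1000).fit(image_feats, y)
ehr_model = LogisticRegression(max_iter=1000).fit(ehr_feats, y)
late_prob = (img_model.predict_proba(image_feats)[:, 1] +
             ehr_model.predict_proba(ehr_feats)[:, 1]) / 2

print("early-fusion accuracy:", early.score(np.hstack([image_feats, ehr_feats]), y))
print("late-fusion accuracy: ", ((late_prob > 0.5).astype(int) == y).mean())
```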

27 pages, 1836 KiB  
Article
Benchmarking Virtual Physics Labs: A Multi-Method MCDA Evaluation of Curriculum Compliance and Pedagogical Efficacy
by Rama M. Bazangika, Ruffin-Benoît M. Ngoie, Jean-Roger M. Bansimba, God’El K. Kinyoka and Billy Nzau Matondo
Information 2025, 16(7), 587; https://doi.org/10.3390/info16070587 - 8 Jul 2025
Abstract
In this paper, we propose the use of virtual labs (VLs) as a solution to bridge the gap between theory and practice in physics education. Through an experiment conducted in two towns in the Democratic Republic of the Congo (DRC), we demonstrate that our proposed lab (BRVL) is more effective than global alternatives in correcting misconceptions and ensuring compliance with the current curriculum in the DRC. We combine Conjoint Analysis (from SPSS) to weigh selected criteria—curriculum compliance, knowledge construction, misconception correction, and usability—alongside eight MCDA methods: AHP, CAHP, TOPSIS, ELECTRE I, ELECTRE II, ELECTRE TRI, PROMETHEE I, and PROMETHEE II. Our findings show that, among six VLs, BRVL consistently outperforms global alternatives like Algodoo and Physion in terms of pedagogical alignment, curriculum compliance, and correction of misconceptions for Congolese schools. Methodologically, the respondents are consistent and in agreement, despite individual differences. The sensitivity analysis of the ELECTRE and PROMETHEE methods has shown that changes in parameter values do not alter the conclusion that BRVL is the best among the compared VLs. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
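
For readers unfamiliar with the MCDA methods listed, here is a self-contained TOPSIS example that ranks alternatives by closeness to an ideal solution. The decision matrix, criteria, and weights are made-up illustrative values, not the study's data.

```python
# TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

alternatives = ["BRVL", "Algodoo", "Physion"]
# criteria: curriculum compliance, knowledge construction, misconception correction, usability
scores = np.array([
    [9.0, 8.0, 9.0, 7.0],
    [6.0, 8.0, 6.0, 9.0],
    [5.0, 7.0, 6.0, 8.0],
])
weights = np.array([0.4, 0.2, 0.3, 0.1])     # illustrative weights (all benefit criteria)

norm = scores / np.linalg.norm(scores, axis=0)          # vector normalization
weighted = norm * weights
ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: closeness = {c:.3f}")
```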

41 pages, 3512 KiB  
Article
Using Machine Learning on Macroeconomic, Technical, and Sentiment Indicators for Stock Market Forecasting
by Michalis Patsiarikas, George Papageorgiou and Christos Tjortjis
Information 2025, 16(7), 584; https://doi.org/10.3390/info16070584 - 7 Jul 2025
Abstract
Financial forecasting is both a research and a practical challenge, providing meaningful economic and strategic insights. While Machine Learning (ML) models are employed in various studies to examine the impact of technical and sentiment factors on financial market forecasting, in this work macroeconomic indicators are also combined to forecast the Standard & Poor’s (S&P) 500 index. Initially, contextual data are scored using TextBlob and pre-trained DistilBERT-base-uncased models, and a combined dataset is formed. After preprocessing, feature engineering, and feature selection, three corresponding datasets are generated and their impact on future prices is examined by employing ML models such as Linear Regression (LR), Random Forest (RF), Gradient Boosting (GB), XGBoost, and Multi-Layer Perceptron (MLP). LR and MLP show robust results, with high R2 scores close to 0.998 and low MSE and MAE values, averaging 350 and 13 points, respectively, across both training and test datasets, with technical indicators contributing the most to the prediction. While other models also perform very well under different dataset combinations, overfitting challenges are evident in the results, even after additional hyperparameter tuning. Potential limitations are highlighted, motivating further exploration and adaptation techniques in financial modeling that enhance predictive capabilities. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
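
A minimal sketch of the feature pipeline the abstract outlines: score text with TextBlob, combine the sentiment with technical and macroeconomic columns, and fit a linear regression. The data, column names, and split are illustrative stand-ins, not the study's dataset.

```python
# Combine TextBlob sentiment with toy technical/macroeconomic features and fit
# a linear regression on a synthetic next-day index value.
import numpy as np
import pandas as pd
from textblob import TextBlob
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

headlines = ["Markets rally on strong earnings", "Inflation fears weigh on stocks",
             "Fed signals steady rates", "Tech shares slump after weak guidance"] * 25
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sentiment": [TextBlob(h).sentiment.polarity for h in headlines],
    "rsi": rng.uniform(20, 80, 100),             # technical indicator (toy values)
    "cpi_yoy": rng.uniform(2, 6, 100),           # macroeconomic indicator (toy values)
})
df["next_close"] = (4000 + 50 * df["sentiment"] + 2 * df["rsi"]
                    - 10 * df["cpi_yoy"] + rng.normal(0, 5, 100))

train, test = df.iloc[:80], df.iloc[80:]
features = ["sentiment", "rsi", "cpi_yoy"]
model = LinearRegression().fit(train[features], train["next_close"])
pred = model.predict(test[features])
print("test MAE:", round(mean_absolute_error(test["next_close"], pred), 2))
```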