Search Results (9,404)

Search Parameters:
Keywords = empirical approaches

18 pages, 547 KB  
Article
Digital Transformation and Supply Chain Resilience in Resource-Constrained Regions: Evidence from Central and Western China
by Yang Jiang and Jijing Hang
Sustainability 2026, 18(2), 802; https://doi.org/10.3390/su18020802 (registering DOI) - 13 Jan 2026
Abstract
In recent years, global supply chains have become increasingly vulnerable to geopolitical tensions, pandemics, and energy crises, particularly in resource-constrained regions characterized by weak infrastructure and high transaction costs. Using panel data on A-share listed firms in China’s central and western regions from 2010 to 2022, this study examines the effect of firm-level digital transformation on supply chain resilience. We construct a digital transformation index and employ an instrumental-variable approach based on the interaction between terrain ruggedness and lagged digital transformation to address endogeneity concerns. Empirical results show that the digital transformation of enterprises has significantly enhanced the resistance and recovery capabilities of the supply chain, verifying its effectiveness in resource-constrained environments. Mechanism analyses reveal that this effect operates through increased supply chain diversification—especially customer diversification—and improved supply–demand matching enabled by more accurate demand forecasting and inventory management. Heterogeneity tests indicate that the resilience-enhancing effects are more pronounced among non-state-owned firms, manufacturing enterprises, and firms in less technology-intensive industries. Overall, our findings provide empirical support for transaction cost economics, dynamic capability theory, and the resource-based view, highlighting the strategic role of digital investment in strengthening supply chain resilience in infrastructure-constrained settings and contributing to the aims of Sustainable Development Goal 9. Full article
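As a rough illustration of the instrumental-variable step sketched in this abstract, the following Python snippet runs a manual two-stage least squares on synthetic data, with the instrument built as the interaction of terrain ruggedness and lagged digital transformation. All variable names and the data-generating process are invented placeholders, not the authors' data or code, and the sketch omits panel fixed effects and controls.

```python
# Minimal two-stage least squares (2SLS) sketch on synthetic data.
# Variable names (ruggedness, dt_index, resilience, ...) are hypothetical
# placeholders, not the authors' data; fixed effects and controls are omitted.
import numpy as np

rng = np.random.default_rng(0)
n = 500

ruggedness = rng.gamma(2.0, 1.0, n)            # terrain ruggedness
dt_lag = rng.normal(0.0, 1.0, n)               # lagged digital transformation
instrument = ruggedness * dt_lag               # interaction instrument
dt_index = 0.5 * instrument + rng.normal(0.0, 1.0, n)    # endogenous regressor
resilience = 0.8 * dt_index + rng.normal(0.0, 1.0, n)    # resilience proxy

def ols(X, y):
    """OLS coefficients of y on X (X must already contain a constant column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)

# Stage 1: project the endogenous regressor on the instrument.
Z = np.column_stack([const, instrument])
dt_hat = Z @ ols(Z, dt_index)

# Stage 2: regress the outcome on the stage-1 fitted values.
X2 = np.column_stack([const, dt_hat])
beta = ols(X2, resilience)
print("2SLS estimate of the digital-transformation effect:", round(float(beta[1]), 3))
```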
47 pages, 3006 KB  
Review
A Systematic Review of the Scalability of Building-Integrated Photovoltaics from a Multidisciplinary Perspective
by Baitong Li, Dian Zhou, Mengyuan Zhou, Duo Xu, Qian Zhang, Yingtao Qi, Zongzhou Zhu and Yujun Yang
Buildings 2026, 16(2), 332; https://doi.org/10.3390/buildings16020332 (registering DOI) - 13 Jan 2026
Abstract
Over the past two decades, Building-Integrated Photovoltaics (BIPV) has become a core technology in the green building sector, driven by global carbon-neutrality goals and the growing demand for sustainable design. This review adopts a scalability-oriented perspective and systematically examines 82 peer-reviewed articles published between 2001 and 2025. The results indicate that existing research is dominated by studies on electrical and thermal performance, with East Asia and Europe—particularly China, Japan, and Germany—emerging as the most active regions. This dominance matters for scalability because real projects must satisfy comfort, compliance, buildability, and operation/maintenance constraints alongside energy yield; limited evidence in these dimensions increases delivery risk when transferring solutions across regions and building types. Accordingly, we interpret the observed distribution as an evidence-maturity pattern: performance gains are increasingly well characterized, whereas deployment-relevant uncertainties (e.g., boundary-condition sensitivity and validation depth) remain less consistently reported. Multidimensional integration of thermal, optical, and electrical functions is gaining momentum; however, user-centered performance dimensions remain underexplored. Simulation-based approaches still prevail, whereas large-scale empirical studies are limited. The review also reveals extensive interdisciplinary collaboration but also identifies a notable lack of architectural perspectives. Using Biblioshiny, this study maps co-authorship networks and research structures. Based on the evidence, we propose future research directions to enhance the practical scalability of BIPV, including strengthening interdisciplinary integration, expanding empirical validation, and developing product-level design strategies. Full article
(This article belongs to the Special Issue Carbon-Neutral Pathways for Urban Building Design)
24 pages, 1203 KB  
Article
Towards Data-Driven Decisions in Agriculture—A Proposed Data Quality Framework for Grains Trials Research
by Aakansha Chadha, Nathan Robinson and Judy Channon
Data 2026, 11(1), 19; https://doi.org/10.3390/data11010019 - 13 Jan 2026
Abstract
Future agriculture will depend on smart systems and digital technologies to improve food production and sustainability. Data-driven methods, such as artificial intelligence, will become integral to agricultural research and development, transforming how decisions are made and how sustainability goals are achieved. Reliable, high-quality data is essential to ensure that research users can trust their conclusions and decisions. To achieve this, a standard for assessing and reporting data quality is required to realise the full potential of data-driven agriculture. Two practical and empirical data quality assessment tools are proposed—a trial data quality test (primarily for data contributors) and a trial data quality statement (for data users). These tools provide information on data qualities assessed for contributors to the submitted trial data and those seeking to use the data for decision support purposes. An action case study using the Online Farm Trials platform illustrates their application. The proposed data quality framework provides a consistent approach for evaluating trial quality and determining fitness for purpose. Flexible and adaptable, the DQF and its tools can be tailored to different agricultural contexts, strengthening confidence in data-driven decision-making and advancing sustainable agriculture. Full article
40 pages, 934 KB  
Article
Learning Model Based on Early Psychological Development and the Constitutive Role of Relationship
by José Víctor Orón Semper and Inmaculada Lizasoain Iriso
Educ. Sci. 2026, 16(1), 116; https://doi.org/10.3390/educsci16010116 - 13 Jan 2026
Abstract
A theoretical model of learning is proposed which is grounded in the constitutive role of interpersonal relationships, integrating contributions from early developmental psychology and relational philosophy. Using a Theoretical Educational Inquiry approach, the study critically examines dominant competency-based and cognitivist models, identifying their inability to account for learning as a deep personal transformation. Drawing on authors such as Stern, Trevarthen, Hobson, Winnicott, and Kohut, it presents empirical evidence that the self and cognitive-affective capacities emerge within primary relational bonds. However, interpersonal relationships are not the environment where development occurs, but the end towards which it is oriented: if the relational bond is the point of departure, the interpersonal encounter is the telos shaping the whole process. The child’s engagement with inner and outer worlds is driven by the search for such encounter, irreducible to mere relational pleasantness, although this may indicate its realization. Philosophical perspectives from Polo, Levinas, Buber, Whitehead, Spaemann, and Marcel support the understanding of learning as a relational event of co-constitution. Learning implies cycles of crisis and reintegration. This approach shifts the focus from skill acquisition as an end to using it as a means for fostering meaningful interpersonal relationships, thereby reorienting education towards a dignity-centered paradigm. Full article
24 pages, 7986 KB  
Article
GVMD-NLM: A Hybrid Denoising Method for GNSS Buoy Elevation Time Series Using Optimized VMD and Non-Local Means Filtering
by Huanghuang Zhang, Shengping Wang, Chao Dong, Guangyu Xu and Xiaobo Cai
Sensors 2026, 26(2), 522; https://doi.org/10.3390/s26020522 - 13 Jan 2026
Abstract
GNSS buoys are essential for real-time elevation monitoring in coastal waterways, yet the vertical coordinate time series are frequently contaminated by complex non-stationary noise, and existing denoising methods often rely on empirical parameter settings that compromise reliability. This paper proposes GVMD-NLM, a hybrid denoising framework optimized by an improved Grey Wolf Optimizer (GWO). The method introduces an adaptive convergence factor decay function derived from the Sigmoid function to automatically determine the optimal parameters (K and α) for Variational Mode Decomposition (VMD). Sample Entropy (SE) is then employed to identify low-frequency effective signals, while the remaining high-frequency noise components are processed via Non-Local Means (NLM) filtering to recover residual information while suppressing stochastic disturbances. Experimental results from two datasets at the Dongguan Waterway Wharf demonstrate that GVMD-NLM consistently outperforms SSA, CEEMDAN, VMD, and GWO-VMD. In Dataset One, GVMD-NLM reduced the RMSE by 26.04% (vs. SSA), 17.87% (vs. CEEMDAN), 24.28% (vs. VMD), and 13.47% (vs. GWO-VMD), with corresponding SNR improvements of 11.13%, 7.00%, 10.18%, and 5.05%. In Dataset Two, the method achieved RMSE reductions of 28.87% (vs. SSA), 17.12% (vs. CEEMDAN), 18.45% (vs. VMD), and 10.26% (vs. GWO-VMD), with SNR improvements of 10.48%, 5.52%, 6.02%, and 3.11%, respectively. The denoised signal maintains high fidelity, with correlation coefficients (R) reaching 0.9798. This approach provides an objective and automated solution for GNSS data denoising, offering a more accurate data foundation for waterway hydrodynamics research and water level monitoring. Full article
(This article belongs to the Special Issue Advances in GNSS Signal Processing and Navigation—Second Edition)
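The sample-entropy screening step mentioned above can be illustrated with a generic, self-contained routine. This is a textbook SampEn implementation for intuition only, not the GVMD-NLM code; the parameters m = 2 and r = 0.2·std are common defaults rather than the paper's settings.

```python
# Generic sample entropy (SampEn) for a 1-D signal, as commonly used to
# separate smooth low-frequency modes from noisy ones. Illustrative only;
# m and r are conventional defaults, not the paper's values.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance between template i and all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# Example: a smooth tide-like component scores lower than white noise.
t = np.linspace(0, 10, 2000)
print(sample_entropy(np.sin(2 * np.pi * 0.2 * t)))                   # low SampEn
print(sample_entropy(np.random.default_rng(0).normal(size=2000)))    # high SampEn
```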
16 pages, 421 KB  
Article
A Note on Tutz’s Pairwise Separation Estimator
by Alexander Robitzsch
AppliedMath 2026, 6(1), 13; https://doi.org/10.3390/appliedmath6010013 - 13 Jan 2026
Abstract
The Rasch model has the desirable property that item parameter estimation can be separated from person parameter estimation. This implies that no assumptions about the ability distribution are required when estimating item difficulties. Pairwise estimation approaches in the Rasch model exploit this principle by estimating item difficulties solely from sample proportions of respondents who answer item i correctly and item j incorrectly. A recent contribution by Tutz introduced Tutz’s pairwise separation estimator (TPSE) for the more general class of homogeneous monotone (HM) models, extending the idea of pairwise estimation to this broader setting. The present article examines the asymptotic behavior of the TPSE within the Rasch model as a special case of the HM framework. It should be emphasized that both analytical derivations and a numerical illustration show that the TPSE yields asymptotically biased item parameter estimates, rendering the estimator inconsistent, even for a large number of items. Consequently, the TPSE cannot be recommended for empirical applications. Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
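For context, the pairwise principle the abstract builds on can be shown in a few lines: under the Rasch model, the log-ratio of the counts n_ij (item i correct, item j incorrect) and n_ji estimates the difficulty difference b_j − b_i. The sketch below implements this classical pairwise estimator on simulated data; it is not Tutz's TPSE, and the simulated difficulties are arbitrary.

```python
# Classical pairwise estimation of Rasch item difficulties from the counts
# n[i, j] = number of respondents solving item i but not item j.
# Shown only to illustrate the pairwise principle; this is not Tutz's TPSE.
import numpy as np

rng = np.random.default_rng(1)
n_persons = 5000
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])             # sum to zero
theta = rng.normal(0.0, 1.0, n_persons)                          # abilities
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties)))       # Rasch probabilities
X = (rng.random(p.shape) < p).astype(int)                        # 0/1 response matrix

# Pairwise counts: respondents correct on item i and incorrect on item j.
n_ij = X.T @ (1 - X)

# log(n_ij / n_ji) estimates b_j - b_i; averaging over i (with a sum-to-zero
# normalization) recovers the item difficulties b_j.
with np.errstate(divide="ignore", invalid="ignore"):
    log_ratio = np.log(n_ij / n_ij.T)
np.fill_diagonal(log_ratio, 0.0)
b_hat = log_ratio.mean(axis=0)

print("true difficulties: ", difficulties)
print("pairwise estimates:", np.round(b_hat, 2))
```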
22 pages, 15609 KB  
Article
Where in the World Should We Produce Green Hydrogen? An Objective First-Pass Site Selection
by Moe Thiri Zun and Benjamin Craig McLellan
Hydrogen 2026, 7(1), 11; https://doi.org/10.3390/hydrogen7010011 - 13 Jan 2026
Abstract
Many nations have been investing in hydrogen energy in the most recent wave of development and numerous projects have been proposed, yet a substantial share of these projects remain at the conceptual or feasibility stage and have not progressed to final investment decision or operation. There is a need to identify initial potential sites for green hydrogen production from renewable energy on an objective basis with minimal upfront cost to the investor. This study develops a decision support system (DSS) for identifying optimal locations for green hydrogen production using solar and wind resources that integrate economic, environmental, technical, social, and risk and safety factors through advanced Multi-Criteria Decision Making (MCDM) techniques. The study evaluates alternative weighting scenarios using (a) occurrence-based, (b) PageRank-based, and (c) equal weighting approaches to minimize human bias and enhance decision transparency. In the occurrence-based approach (a), renewable resource potential receives the highest weighting (≈34% total weighting). By comparison, approach (b) redistributes importance toward infrastructure and social indicators, yielding a more balanced representation of technical and economic priorities and highlighting the practical value of capturing interdependencies among indicators for resource-efficient site selection. The research also contrasts the empirical and operational efficiencies of various weighting methods and processing stages, highlighting strengths and weaknesses in supporting sustainable and economically viable site selection. Ultimately, this research contributes significantly to both academic and practical implementations in the green hydrogen sector, providing a strategic, data-driven approach to support sustainable energy transitions. Full article
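The PageRank-based weighting idea can be sketched by treating criteria as nodes whose edge weights reflect how often indicators are linked in the reviewed literature, then normalizing the PageRank scores into MCDM weights. The criteria and co-occurrence counts below are invented for illustration and do not reproduce the study's indicator set or data.

```python
# Sketch of PageRank-based criterion weighting for MCDM site selection.
# Criteria and co-occurrence counts are invented placeholders.
import networkx as nx

criteria = ["solar potential", "wind potential", "water availability",
            "grid access", "port distance", "social acceptance", "seismic risk"]

# Hypothetical symmetric co-occurrence counts between indicators.
links = {
    ("solar potential", "grid access"): 14,
    ("wind potential", "grid access"): 11,
    ("solar potential", "water availability"): 9,
    ("grid access", "port distance"): 8,
    ("social acceptance", "port distance"): 5,
    ("seismic risk", "grid access"): 3,
    ("wind potential", "social acceptance"): 4,
}

G = nx.Graph()
G.add_nodes_from(criteria)
G.add_weighted_edges_from((a, b, w) for (a, b), w in links.items())

# PageRank scores reflect how central each criterion is in the network of
# interdependencies; normalizing them yields the MCDM weights.
scores = nx.pagerank(G, alpha=0.85, weight="weight")
total = sum(scores.values())
weights = {c: scores[c] / total for c in criteria}

for c, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{c:>20s}: {w:.3f}")
```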
21 pages, 860 KB  
Review
Early Antifungal Treatment in Immunocompromised Patients, Including Hematological and Critically Ill Patients
by Galina Klyasova, Galina Solopova, Jehad Abdalla, Marina Popova, Muhlis Cem Ar, Murat Sungur, Riad El Fakih, Reem S. Almaghrabi and Murat Akova
J. Fungi 2026, 12(1), 59; https://doi.org/10.3390/jof12010059 - 13 Jan 2026
Abstract
(1) Background: Invasive fungal diseases (IFDs) represent significant challenges in clinical practice, particularly among immunocompromised individuals, leading to substantial morbidity and mortality. The present document aims to provide evidence-based consensus for the timely initiation of antifungal treatment, focusing on early empiric approaches among immunocompromised patients. (2) Methods: A multidisciplinary expert panel of nine healthcare professionals (HCPs) reviewed the literature, including guidelines and consensus reports (2013–2023; PubMed, Scopus). The panel defined appropriate empiric antifungal approaches for invasive candidiasis, aspergillosis, and mucormycosis among hematological and critically ill patients. Consensus was defined as ≥75% agreement. (3) Results: A total of 47 statements were included. The experts recommend that early targeted antifungal therapy is critical for high-risk patients with suspected IFDs. Empiric therapy may be initiated before definitive diagnosis, considering the local fungal prevalence and the patient’s risk category. Close monitoring is essential, and switching between antifungal classes may be necessary for patients who experience deterioration or side effects. The transition from intravenous to oral therapy depends on the specific infection, the availability of therapeutic drug monitoring, and the patient’s progress. (4) Conclusions: Implementing this targeted, early approach may improve the outcomes of vulnerable patients with IFDs. Full article
(This article belongs to the Section Fungal Pathogenesis and Disease Control)
12 pages, 20475 KB  
Article
Perceiving Through the Painted Surface: Viewer-Dependent Depth Illusion in a Renaissance Work
by Siamak Khatibi, Yuan Zhou and Linus de Petris
Arts 2026, 15(1), 16; https://doi.org/10.3390/arts15010016 - 12 Jan 2026
Abstract
This study explores how classical painting techniques, particularly those rooted in the Renaissance tradition, can produce illusions of depth that vary with the viewer’s position. Focusing on a work rich in soft shading and subtle tonal transitions, we investigate how movement across the frontal plane influences the perception of spatial structure. A sequence of high-resolution photographs was taken from slightly offset viewpoints, simulating natural viewer motion. Using image alignment and pixel-wise difference mapping, we reveal perceptual shifts that suggest the presence of latent three-dimensional cues embedded within the painted surface. The findings offer visual and empirical support for concepts such as and dynamic engagement, where depth is constructed not solely by the image, but by the interaction between the artwork and the observer. Our approach demonstrates how digital analysis can enrich art historical interpretation, offering new insight into how still images can evoke the illusion of spatial presence. Full article
(This article belongs to the Section Visual Arts)
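A minimal version of the alignment-and-difference step described in this abstract might look like the following, assuming two photographs taken from slightly offset viewpoints and a simple rigid-translation alignment via phase correlation. The synthetic images and the choice of phase correlation are assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch of image alignment plus pixel-wise difference mapping.
# A synthetic grayscale "painting" and a translated copy stand in for the
# real photographs; file loading and preprocessing are assumptions.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
painting = rng.normal(0.5, 0.1, (256, 256))                       # one viewpoint
offset_view = nd_shift(painting, (2.0, -3.0), mode="nearest")     # offset viewpoint

# Estimate the translation between the two views and re-align them.
shift_est, _, _ = phase_cross_correlation(painting, offset_view)
realigned = nd_shift(offset_view, shift_est, mode="nearest")

# Pixel-wise difference map: residual structure after alignment hints at
# viewpoint-dependent cues rather than a simple global displacement.
difference_map = np.abs(painting - realigned)
print("estimated shift:", shift_est)
print("mean residual difference:", difference_map.mean())
```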
13 pages, 366 KB  
Review
Mathematical Modeling of Local Drug Delivery in the Oral Cavity: From Release Kinetics to Mini-PBPK and Local PK/PD with Applications to Periodontal Therapies
by Rafał Rakoczy, Monika Machoy-Rakoczy and Izabela Gutowska
Pharmaceutics 2026, 18(1), 101; https://doi.org/10.3390/pharmaceutics18010101 - 12 Jan 2026
Abstract
Background/Objectives: Mathematical modelling provides a quantitative way to describe the fate and action of drugs in the oral cavity, where transport processes are shaped by salivary flow, pellicle formation, biofilm structure and the wash-out effect of gingival crevicular fluid (GCF). Local pharmacokinetics in the mouth differ substantially from systemic models, and therefore a dedicated framework is required. The aim of this work was to present a structured, physiologically based concept that links in vitro release testing with local pharmacokinetics and pharmacodynamics. Methods: A narrative review with elements of a systematic search was conducted in PubMed, Scopus and Web of Science (1980–2025) for publications describing drug release, local PBPK, and PK/PD modelling in the oral cavity. Mathematical formulations were grouped into release kinetics, mini-PBPK transport and local PK/PD relations. Classical models (Higuchi, Korsmeyer–Peppas, Peppas–Sahlin) were integrated with a mini-PBPK structure describing saliva–mucosa–biofilm–pocket interactions. Results: The combined model captures adsorption to pellicle, diffusion within biofilm and wash-out by GCF. It allows simulation of variable clinical conditions, such as inflammation-related changes in Q_GCF, and links local exposure to pharmacodynamic outcomes. Case studies with PerioChip®, Arestin®, and Atridox® demonstrate how mechanistic models explain observed therapeutic duration and low systemic exposure. Conclusions: The proposed mini-PBPK framework bridges empirical release data and physiological transport in the oral cavity. It supports rational formulation design, optimisation of local dosage, and personalised prediction of drug retention in gingival pockets. This modelling approach can become a practical tool for the development of dental biomaterials and subgingival therapies. Full article
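The classical release models named above have simple closed forms (Higuchi: Q = k_H·√t; Korsmeyer–Peppas: M_t/M_∞ = k·t^n). As a hedged illustration, the snippet below fits the Korsmeyer–Peppas equation to invented fractional-release data; the time points and release values are placeholders, not data from the reviewed formulations.

```python
# Fitting the Korsmeyer-Peppas release model M_t/M_inf = k * t**n to
# invented fractional-release data. Only the early portion of the release
# curve (roughly <= 60% released) is normally used for this model.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

# Hypothetical cumulative fractional release at times in hours.
t_hours = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
released = np.array([0.08, 0.12, 0.18, 0.26, 0.37, 0.45, 0.58])

(k_fit, n_fit), _ = curve_fit(korsmeyer_peppas, t_hours, released, p0=(0.1, 0.5))
print(f"k = {k_fit:.3f}, n = {n_fit:.3f}")
# n close to 0.5 suggests Fickian (Higuchi-like) diffusion from a thin film;
# larger n points to anomalous or swelling-controlled transport.
```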
27 pages, 1259 KB  
Article
Living Lab Assessment Method (LLAM): Towards a Methodology for Context-Sensitive Impact and Value Assessment
by Ben Robaeyst, Tom Van Nieuwenhove, Dimitri Schuurman, Jeroen Bourgonjon, Stephanie Van Hove and Bastiaan Baccarne
Sustainability 2026, 18(2), 779; https://doi.org/10.3390/su18020779 - 12 Jan 2026
Abstract
This paper presents the Living Lab Assessment Method (LLAM), a context-sensitive framework for assessing impact and value creation in Living Labs (LLs). While LLs have become established instruments for Open and Urban Innovation, systematic and transferable approaches to evaluate their impact remain scarce and still show theoretical and practical barriers. This study proposes a new methodological approach that aims to address these challenges through the development of the LLAM, the Living Lab Assessment Method. This study reports a five-year iterative development process embedded in Ghent’s urban and social innovation ecosystem through the combination of three complementary methodological pillars: (1) co-creation and co-design with lead users, ensuring alignment with practitioner needs and real-world conditions; (2) multiple case study research, enabling iterative refinement across diverse Living Lab projects, and (3) participatory action research, integrating reflexive and iterative cycles of observation, implementation, and adjustment. The LLAM was empirically developed and validated across four use cases, each contributing to the method’s operational robustness and contextual adaptability. Results show that LLAM captures multi-level value creation, ranging from individual learning and network strengthening to systemic transformation, by linking participatory processes to outcomes across stakeholder, project, and ecosystem levels. The paper concludes that LLAM advances both theoretical understanding and practical evaluation of Living Labs by providing a structured, adaptable, and empirically grounded methodology for assessing their contribution to sustainable and inclusive urban innovation. Full article
(This article belongs to the Special Issue Sustainable Impact and Systemic Change via Living Labs)
16 pages, 1234 KB  
Article
Assessing the Determinants of Trust in AI Algorithms in the Conditions of Sustainable Development of the Organization
by Mariusz Salwin, Maria Kocot, Artur Kwasek, Adrianna Trzaskowska-Dmoch, Michał Pałęga and Adrian Kopytowski
Sustainability 2026, 18(2), 776; https://doi.org/10.3390/su18020776 - 12 Jan 2026
Abstract
The article addresses the problem of the insufficient empirical recognition of the determinants of trust in artificial intelligence (AI) algorithms in organizations operating under conditions of sustainable development. The aim of the study was to identify the factors shaping organizational trust in AI and to examine how perceived trustworthiness, transparency, and effectiveness of algorithms influence their acceptance in the work environment. The research was conducted using a quantitative survey-based approach among organizational employees, which enabled the analysis of relationships between key variables and the identification of factors that strengthen or limit trust. The results indicate that algorithmic transparency, the reliability of generated outcomes, and the perceived effectiveness of AI applications significantly foster trust, whereas concerns related to errors and the decision-making autonomy of systems constitute important barriers to acceptance. Based on the findings, a conceptual and exploratory model of trust in AI was proposed, which may be used to diagnose the level of technology acceptance and to support the responsible implementation of artificial intelligence-based solutions in organizations. The contribution of the article lies in integrating organizational and technological perspectives and in providing an empirical approach to trust in AI within the context of sustainable development. Full article
(This article belongs to the Special Issue Advancing Innovation and Sustainability in SMEs and Entrepreneurship)
20 pages, 3746 KB  
Article
Fault Diagnosis and Classification of Rolling Bearings Using ICEEMDAN–CNN–BiLSTM and Acoustic Emission
by Jinliang Li, Haoran Sheng, Bin Liu and Xuewei Liu
Sensors 2026, 26(2), 507; https://doi.org/10.3390/s26020507 - 12 Jan 2026
Abstract
Reliable operation of rolling bearings is essential for mechanical systems. Acoustic emission (AE) offers a promising approach for bearing fault detection because of its high-frequency response and strong noise-suppression capability. This study proposes an intelligent diagnostic method that combines an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and a convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture. The method first applies wavelet denoising to AE signals, then uses ICEEMDAN decomposition followed by kurtosis-based screening to extract key fault components and construct feature vectors. Subsequently, a CNN automatically learns deep time–frequency features, and a BiLSTM captures temporal dependencies among these features, enabling end-to-end fault identification. Experiments were conducted on a bearing acoustic emission dataset comprising 15 operating conditions, five fault types, and three rotational speeds; comparative model tests were also performed. Results indicate that ICEEMDAN effectively suppresses mode mixing (average mixing rate 6.08%), and the proposed model attained an average test-set recognition accuracy of 98.00%, significantly outperforming comparative models. Moreover, the model maintained 96.67% accuracy on an independent validation set, demonstrating strong generalization and practical application potential. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
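The kurtosis-based screening step can be illustrated generically: modes with impulsive, fault-like content score high on kurtosis and are kept as inputs to the classifier. The synthetic modes below stand in for ICEEMDAN outputs; this is not the authors' pipeline, and no ICEEMDAN implementation is included.

```python
# Generic kurtosis-based screening of decomposition modes (IMFs).
# Bearing fault impulses inflate kurtosis, so the most impulsive modes are
# kept as fault-related features. Synthetic stand-ins only.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)

# Three synthetic "modes": smooth oscillation, white noise, impulsive bursts.
smooth = np.sin(2 * np.pi * 30 * t)
noise = rng.normal(0, 1, t.size)
impulsive = rng.normal(0, 0.1, t.size)
impulsive[::400] += 8.0          # periodic impacts, as from a bearing defect

modes = np.stack([smooth, noise, impulsive])

# Fisher kurtosis (normal distribution -> 0); higher means more impulsive.
scores = kurtosis(modes, axis=1, fisher=True)
keep = np.argsort(scores)[::-1][:1]          # keep the most impulsive mode(s)

print("kurtosis per mode:", np.round(scores, 2))
print("selected mode indices:", keep)
feature_vector = modes[keep].reshape(len(keep), -1)  # input to the next stage
```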
18 pages, 925 KB  
Article
A Stock Price Prediction Network That Integrates Multi-Scale Channel Attention Mechanism and Sparse Perturbation Greedy Optimization
by Jiarun He, Fangying Wan and Mingfang He
Algorithms 2026, 19(1), 67; https://doi.org/10.3390/a19010067 - 12 Jan 2026
Abstract
The stock market is of paramount importance to economic development. Investors who accurately predict stock price fluctuations based on its high volatility can effectively mitigate investment risks and achieve higher returns. Traditional time series models face limitations when dealing with long sequences and short-term volatility issues, often yielding unsatisfactory predictive outcomes. This paper proposes a novel algorithm, MSNet, which integrates a Multi-scale Channel Attention mechanism (MSCA) and Sparse Perturbation Greedy Optimization (SPGO) onto an xLSTM framework. The MSCA enhances the model’s spatio-temporal information modeling capabilities, effectively preserving key price features within stock data. Meanwhile, SPGO improves the exploration of optimal solutions during training, thereby strengthening the model’s generalization stability against short-term market fluctuations. Experimental results demonstrate that MSNet achieves an MSE of 0.0093 and an MAE of 0.0152 on our proprietary dataset. This approach effectively extracts temporal features from complex stock market data, providing empirical insights and guidance for time series forecasting. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
23 pages, 1141 KB  
Article
Randomized Algorithms and Neural Networks for Communication-Free Multiagent Singleton Set Cover
by Guanchu He, Colton Hill, Joshua H. Seaton and Philip N. Brown
Games 2026, 17(1), 3; https://doi.org/10.3390/g17010003 - 12 Jan 2026
Abstract
This paper considers how a system designer can program a team of autonomous agents to coordinate with one another such that each agent selects (or covers) an individual resource with the goal that all agents collectively cover the maximum number of resources. Specifically, we study how agents can formulate strategies without information about other agents’ actions so that system-level performance remains robust in the presence of communication failures. First, we use an algorithmic approach to study the scenario in which all agents lose the ability to communicate with one another, have a symmetric set of resources to choose from, and select actions independently according to a probability distribution over the resources. We show that the distribution that maximizes the expected system-level objective under this approach can be computed by solving a convex optimization problem, and we introduce a novel polynomial-time heuristic based on subset selection. Further, both of the methods are guaranteed to be within 1 − 1/e of the system’s optimal in expectation. Second, we use a learning-based approach to study how a system designer can employ neural networks to approximate optimal agent strategies in the presence of communication failures. The neural network, trained on system-level optimal outcomes obtained through brute-force enumeration, generates utility functions that enable agents to make decisions in a distributed manner. Empirical results indicate the neural network often outperforms greedy and randomized baseline algorithms. Collectively, these findings provide a broad study of optimal agent behavior and its impact on system-level performance when the information available to agents is extremely limited. Full article
(This article belongs to the Section Algorithmic and Computational Game Theory)
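To make the randomized baseline concrete: if n agents independently pick one of m symmetric resources according to a probability vector p, the expected number of covered resources is Σ_r (1 − (1 − p_r)^n), and the guarantee quoted above says the best such distribution achieves at least a 1 − 1/e fraction of the coordinated optimum min(n, m). The quick check below evaluates the uniform distribution; it illustrates the bound and is not the paper's convex-optimization routine.

```python
# Expected coverage when n agents independently pick one of m symmetric
# resources according to a probability vector p, versus the 1 - 1/e bound.
# Illustrative check only.
import numpy as np

def expected_coverage(p, n):
    """E[# distinct resources covered] = sum_r (1 - (1 - p_r)**n)."""
    p = np.asarray(p, dtype=float)
    return np.sum(1.0 - (1.0 - p) ** n)

n_agents, m_resources = 8, 10
uniform = np.full(m_resources, 1.0 / m_resources)

opt = min(n_agents, m_resources)            # best possible with coordination
exp_cov = expected_coverage(uniform, n_agents)
bound = (1.0 - 1.0 / np.e) * opt

print(f"expected coverage (uniform): {exp_cov:.3f}")
print(f"optimal with coordination:   {opt}")
print(f"(1 - 1/e) * optimal:         {bound:.3f}")
# The uniform distribution already clears the 1 - 1/e benchmark in this case.
```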