Search Results (188)

Search Parameters:
Keywords = cold start problem

36 pages, 32991 KB  
Article
Insights into the Combustion and Emission Characteristics of the Diesel Engine in the Cold Start Stage
by Xuewen Zhang, Hongrui Jing, Hongling Qiu, Peiyong Ni, Zexin Zhong and Xiang Li
Sustainability 2026, 18(3), 1680; https://doi.org/10.3390/su18031680 - 6 Feb 2026
Viewed by 169
Abstract
With the widespread adoption of diesel engine technology, the problem of pollutant emissions has become increasingly prominent. In the cold start stage of a diesel engine especially, instantaneous pollutant emissions may be several times or even tens of times those of stable operation, aggravating environmental deterioration. Therefore, the combustion characteristics and emissions of a two-cylinder diesel engine at high altitudes and low temperatures were explored and analyzed in this research. By adjusting the injection timing and compression ratio (CR) experimentally, the optimal combination of parameters to simulate the emissions under high-altitude and low-temperature conditions was determined. The results show that advancing the injection timing can improve the combustion efficiency, but higher CR and injection timing significantly influence the hydrocarbon (HC)/nitrogen oxide (NOX) trade-off. While delaying the injection timing can reduce NOX emissions, it can increase HC emissions. Increasing CR from 18.5 to 20.5 raised peak instantaneous NOX emissions by approximately 27.7% but contributed to a reduction in HC emissions. In the cold start stage, HC concentration peaked sharply and gradually stabilized, while NOX concentration rose rapidly with more fluctuations. Under high-altitude conditions, HC emissions normally rise with altitude: at 4000 m, HC emissions increased by 27.9% compared with 0 m, although the concentration decreased again at 5000 m. NOX emissions decreased with elevation, and ambient temperature had little effect. Full article

28 pages, 438 KB  
Article
Holographic Naturalness and Information See-Saw Mechanism for Neutrinos
by Andrea Addazi and Giuseppe Meluccio
Particles 2026, 9(1), 11; https://doi.org/10.3390/particles9010011 - 2 Feb 2026
Viewed by 248
Abstract
The microscopic origin of the de Sitter entropy remains a central puzzle in quantum gravity that is related to the cosmological constant problem. Within the paradigm of Holographic Naturalness, we propose that this entropy is carried by a vast number of light, coherent degrees of freedom—called “hairons”—which emerge as the moduli of gravitational instantons on orbifolds. Starting from the Euclidean de Sitter instanton (S^4), we construct a new class of orbifold gravitational instantons, S^4/Z_N, where N corresponds to the de Sitter entropy. We demonstrate that the dimension of the moduli space of these instantons scales linearly with N, and we identify these moduli with the hairon fields. A Z_N symmetry, derived from Wilson loops in the instanton background, ensures the distinguishability of these modes, leading to the correct entropy count. The hairons acquire a mass of the order of the Hubble scale and exhibit negligible mutual interactions, suggesting that the de Sitter vacuum is a coherent state, or Bose–Einstein condensate, of these fundamental excitations. Then, we present a novel framework which unifies neutrino mass generation with the cosmological constant through gravitational topology and holography. The small neutrino mass scale emerges naturally from first principles, without requiring new physics beyond the Standard Model and Gravity. The gravitational Chern–Simons structure and its anomaly with neutrinos force a topological Higgs mechanism, leading to neutrino condensation via S^4/Z_N gravitational instantons. The number of topological degrees of freedom N ~ M_P^2/Λ ~ 10^120 provides both the holographic counting of the de Sitter entropy and a 1/N information see-saw mechanism for neutrino masses.
Our framework makes the following predictions: (i) a neutrino superfluid condensation forming Cooper pairs below meV energies, as a viable candidate for cold dark matter; (ii) a possible resolution of the strong CP problem through a QCD composite axion state; (iii) time-varying neutrino masses which track the evolution of dark energy; and (iv) several distinctive signatures in astroparticle physics, ultra-high-energy cosmic rays and high magnetic field experiments. Full article
21 pages, 583 KB  
Article
Beyond Accuracy: The Cognitive Economy of Trust and Absorption in the Adoption of AI-Generated Forecasts
by Anne-Marie Sassenberg, Nirmal Acharya, Padmaja Kar and Mohammad Sadegh Eshaghi
Forecasting 2026, 8(1), 8; https://doi.org/10.3390/forecast8010008 - 21 Jan 2026
Viewed by 246
Abstract
AI Recommender Systems (RecSys) function as personalised forecasting engines, predicting user preferences to reduce information overload. However, the efficacy of these systems is often bottlenecked by the “Last Mile” of forecasting: the end-user’s willingness to adopt and rely on the prediction. While the existing literature often assumes that algorithmic accuracy (e.g., low RMSE) automatically drives utilisation, empirical evidence suggests that users frequently reject accurate forecasts due to a lack of trust or cognitive friction. This study challenges the utilitarian view that users adopt systems simply because they are useful, instead proposing that sustainable adoption requires a state of Cognitive Absorption—a psychological flow state enabled by the Cognitive Economy of trust. Grounded in the Motivation–Opportunity–Ability (MOA) framework, we developed the Trust–Absorption–Intention (TAI) model. We analysed data from 366 users of a major predictive platform using Partial Least Squares Structural Equation Modelling (PLS-SEM). The Disjoint Two-Stage Approach was employed to model the reflective–formative Higher-Order Constructs. The results demonstrate that Cognitive Trust (specifically the relational dimensions of Benevolence and Integrity) operates via a dual pathway. It drives adoption directly, serving as a mechanism of Cognitive Economy where users suspend vigilance to rely on the AI as a heuristic, while simultaneously freeing mental resources to enter a state of Cognitive Absorption. Affective Trust further drives this immersion by fostering curiosity. Crucially, Cognitive Absorption partially mediates the relationship between Cognitive Trust and adoption intention, whereas it fully mediates the impact of Affective Trust. This indicates that while Cognitive Trust can drive reliance directly as a rational shortcut, Affective Trust translates to adoption only when it successfully triggers a flow state. 
This study bridges the gap between algorithmic forecasting and behavioural adoption. It introduces the Cognitive Economy perspective: Trust reduces the cognitive cost of verifying predictions, allowing users to outsource decision-making to the AI and enter a state of effortless immersion. For designers of AI forecasting agents, the findings suggest that maximising accuracy may be less effective than minimising cognitive friction for sustaining long-term adoption. To solve the cold start problem, platforms should be designed for flow by building emotional rapport and explainability, thereby converting sporadic users into continuous data contributors. Full article
(This article belongs to the Section AI Forecasting)

27 pages, 2766 KB  
Article
Explainable Reciprocal Recommender System for Affiliate–Seller Matching: A Two-Stage Deep Learning Approach
by Hanadi Almutairi and Mourad Ykhlef
Information 2026, 17(1), 101; https://doi.org/10.3390/info17010101 - 19 Jan 2026
Viewed by 210
Abstract
This paper presents a two-stage explainable recommendation system for reciprocal affiliate–seller matching that uses machine learning and data science to handle voluminous data and generate personalized ranking lists for each user. In the first stage, a representation learning model was trained to create dense embeddings for affiliates and sellers, ensuring efficient identification of relevant pairs. In the second stage, a learning-to-rank approach was applied to refine the recommendation list based on user suitability and relevance. Diversity-enhancing reranking (maximal marginal relevance/explicit query aspect diversification) and popularity penalties were also implemented, and their effects on accuracy and provider-side diversity were quantified. Model interpretability techniques were used to identify which features affect a recommendation. The system was evaluated on a fully synthetic dataset that mimics the high-level statistics generated by affiliate platforms, and the results were compared against classical baselines (ALS, Bayesian personalized ranking) and ablated variants of the proposed model. While the reported ranking metrics (e.g., normalized discounted cumulative gain at 10 (NDCG@10)) are close to 1.0 under controlled conditions, potential overfitting, synthetic data limitations, and the need for further validation on real-world datasets are addressed. Attributions based on Shapley additive explanations were computed offline for the ranking model and excluded from the online latency budget, which was dominated by approximate nearest neighbors-based retrieval and listwise ranking. Our work demonstrates that high top-K accuracy, diversity-aware reranking, and post hoc explainability can be integrated within a single recommendation pipeline. 
While initially validated under synthetic evaluation, the pipeline was further assessed on a public real-world behavioral dataset, highlighting deployment challenges in affiliate–seller platforms and revealing practical constraints related to incomplete metadata. Full article

35 pages, 4290 KB  
Article
AI-Based Health Monitoring for Class I Induction Motors in Data-Scarce Environments: From Synthetic Baseline Generation to Industrial Implementation
by Duter Struwig, Jan-Hendrik Kruger, Henri Marais and Abrie Steyn
Appl. Sci. 2026, 16(2), 940; https://doi.org/10.3390/app16020940 - 16 Jan 2026
Viewed by 187
Abstract
Condition-based maintenance strategies using AI-driven health monitoring have emerged as valuable tools for industrial reliability, yet their implementation remains challenging in industries with limited operational data. Class I induction motors (≤15 kW), which power critical equipment in industries such as grain handling facilities, represent a significant portion of industrial assets but lack established healthy vibration baselines for effective monitoring. A fundamental challenge exists in deploying AI-based health monitoring systems when no historical performance data is available, creating a ’cold-start’ problem that prevents industries from adopting predictive maintenance strategies without costly pilot programs or prolonged data collection periods. This study developed a data-driven health monitoring framework for Class I induction motors that eliminates the dependency on long-term historical trends. Through extensive experimental testing of 98 configurations on new motors, a correlation between vibration amplitude at rotational frequency and motor power rating was established, enabling the creation of a synthetic signal generation algorithm. A robust Health Index (HI) model with integrated diagnostic capabilities was developed using the JPCCED-HI framework, trained on both experimental and synthetically generated healthy vibration data to detect degradation and diagnose common failure modes. The regression analysis revealed a statistically significant relationship between motor power rating and healthy vibration signatures, enabling synthetic generation of baseline data for any Class I motor within the rated range. When implemented at an operational grain silo facility, the HI model successfully detected faulty behavior and accurately diagnosed probable failure modes in equipment with no prior monitoring history, demonstrating that maintenance decisions could be made based on condition data rather than reactive responses to failures. 
This framework enables immediate deployment of AI-based condition monitoring in industries lacking historical data, eliminating a major barrier to adopting predictive maintenance strategies. The synthetic data generation approach provides a cost-effective solution to the data scarcity problem identified as a critical challenge in industrial AI applications, while the successful industrial implementation validates the feasibility of this approach for small-to-medium industrial facilities. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
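The correlation-plus-synthesis idea in the abstract above can be sketched in a few lines. Everything below (the sample pairs, the linear model, the rotational-frequency signal) is a hypothetical stand-in for the paper's 98-configuration regression and JPCCED-HI framework, not the authors' fitted values:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b over paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (power_kW, healthy vibration amplitude) pairs standing in
# for the experimental configurations measured on new motors.
samples = [(1.5, 0.42), (3.0, 0.55), (5.5, 0.74), (7.5, 0.92), (11.0, 1.21), (15.0, 1.55)]
a, b = fit_linear(*zip(*samples))

def synthetic_baseline(power_kw, rpm=1480, fs=1000, seconds=1.0):
    """Synthesize a healthy vibration signal at the rotational frequency,
    with amplitude predicted from the power-rating regression."""
    amp = a * power_kw + b
    f_rot = rpm / 60.0  # rotational frequency in Hz
    n = int(fs * seconds)
    return [amp * math.sin(2 * math.pi * f_rot * t / fs) for t in range(n)]

# Baseline for a previously unmonitored 7.5 kW Class I motor.
sig = synthetic_baseline(7.5)
```

A Health Index model would then be trained against such synthetic baselines when no measured healthy history exists, which is the cold-start workaround the abstract describes.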

20 pages, 2472 KB  
Article
Filtration System for Reducing CO2 Concentration from Combustion Gases of Used Spark Ignition Engines
by Radu Tarulescu, Stelian Tarulescu, Razvan Gabriel Boboc and Mircea Nastasoiu
Vehicles 2026, 8(1), 19; https://doi.org/10.3390/vehicles8010019 - 15 Jan 2026
Viewed by 206
Abstract
This research paper proposes a solution to reduce CO2 emissions from a spark ignition engine’s exhaust gases by installing a filtration system on the vehicle’s exhaust pipe. The analyzed filtration system was not patented and was in the testing stage; further tests will also be carried out on a test stand. The tested system can be used to reduce CO2 levels in automotive exhaust gases and for static applications (generators, internal combustion engine test stands, fossil fuel power generation systems). The need for a system to reduce pollutant emissions emerged with the rising average vehicle age in Europe. Under suitable conditions, some vehicles can use this type of filtration system. The tested vehicle (produced in 2009) is equipped with a 75 HP spark ignition engine. The CO2 filtration system consists of a container holding a reactive aqueous solution of water, CaO, and MgO. Four tests were performed: the first without a filter, and the other three with the filter placed at different distances from the exhaust pipe end to the reactive solution surface. The tests consisted of evaluating the exhaust gases from the cold start of the engine and running (at idle engine speed) until the engine reached the optimal operating temperature. The test procedure involved saving the data collected by the analyzer every 10 s for each of the four tests performed (the duration of a test was 1050 s). The first test (No. 1) was performed without the use of the filtering system. Tests 2, 3, and 4 were carried out using the filtering system and changing the distance between the exhaust gases’ outlet point and the surface of the aqueous substance. All tests were carried out under similar conditions. Data specific to engine testing were collected—emissions (CO2, CO, NOx), ambient temperature, and exhaust temperature. The tests were analyzed and compared, and the highest CO2 reductions without increases in CO or NOx were observed in Tests 3 and 4.
Based on the detailed analysis of the values obtained from the four tests, the system was efficient. The tests will continue on experimental engines from test stands, to develop a prototype filter for primarily static applications with internal combustion engines: test stands for engines and generators, and, after homologation, directly on vehicles. The paper aims to partially solve an important problem—reducing the level of CO2 from the exhaust gases. The presented solution may have applicability in the automotive industry but is also feasible for static applications. Another objective is to reduce emissions from older vehicles, which are widespread in certain regions of Europe and worldwide. Full article
(This article belongs to the Special Issue Intelligent Mobility and Sustainable Automotive Technologies)

24 pages, 2596 KB  
Article
KnoChain: Knowledge-Aware Recommendation for Alleviating Cold Start in Sustainable Procurement
by Peijia Li, Yue Ma, Kunqi Hou and Shipeng Li
Sustainability 2026, 18(1), 506; https://doi.org/10.3390/su18010506 - 4 Jan 2026
Viewed by 307
Abstract
When new purchasers or products are added to the supply chain management system, the recommendation system faces severe challenges of data sparsity and cold start. A knowledge graph that can enrich the representations of both procurement managers and products offers a promising pathway to mitigate these challenges. This paper proposes a knowledge-aware recommendation network for supply chain management, called KnoChain. The proposed model refines purchaser representations through outward propagation along knowledge graph links and enhances product representations via inward aggregation of multi-hop neighbourhood information. This dual approach enables the simultaneous discovery of purchasers’ latent preferences and products’ underlying characteristics, facilitating precise and personalised recommendations. Extensive experiments on three real-world datasets demonstrate that the proposed method consistently outperforms several state-of-the-art baselines, achieving average AUC improvements of 9.36%, 5.91%, and 8.81%, and average accuracy gains of 8.56%, 6.27%, and 8.67% on the movie, book, and music datasets, respectively. These results underscore the model’s potential to enhance recommendation robustness in supply chain management. The KnoChain framework proposed in this article combines purchaser-aware attention with knowledge graphs to improve the accuracy of purchaser–SKU matching. The method can help enhance supply chain resilience and reduce returns caused by over-ordering, inventory backlog, and incorrect procurement. In addition, the model provides interpretable recommendation paths based on the knowledge graph, which improves trust and auditability for procurement personnel and helps balance environmental and operational costs. Full article
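The inward multi-hop aggregation described above can be illustrated on a toy graph. The entities, one-hot seed embeddings, and simple averaging rule below are invented for this sketch and are not KnoChain's actual attention-weighted update equations:

```python
# Toy knowledge graph: entity -> neighbouring entities (hypothetical names).
kg = {
    "laptop": ["electronics", "brandX"],
    "electronics": ["warehouseA"],
    "brandX": ["supplierY"],
    "warehouseA": [],
    "supplierY": [],
}
# One-hot seed embeddings, one dimension per entity.
emb = {e: [float(i == k) for k in range(5)] for i, e in enumerate(kg)}

def aggregate(entity, hops):
    """Inward aggregation: average the entity's own embedding with the
    recursively aggregated embeddings of its multi-hop neighbours."""
    v = emb[entity]
    if hops == 0 or not kg[entity]:
        return v
    neigh = [aggregate(n, hops - 1) for n in kg[entity]]
    mean_n = [sum(col) / len(neigh) for col in zip(*neigh)]
    return [0.5 * (own + m) for own, m in zip(v, mean_n)]

# A cold-start item still gets an informative vector via its KG neighbourhood.
item_vec = aggregate("laptop", hops=2)
```

Even with no interaction history for "laptop", its representation now mixes in category, brand, warehouse, and supplier signal, which is the cold-start mitigation the abstract describes.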

15 pages, 1182 KB  
Article
Enhanced Recommender System with Sentiment Analysis of Review Text and SBERT Embeddings of Item Descriptions
by Doyeon Lim and Taemin Lee
Mathematics 2026, 14(1), 184; https://doi.org/10.3390/math14010184 - 3 Jan 2026
Viewed by 424
Abstract
As the transition from offline to online shopping takes place in many societies, many studies have been conducted to align products with user preferences. However, existing collaborative filtering technology suffers from a small number of user–item interactions, resulting in data sparsity and cold start problems. This study proposes a recommendation system that combines customer preference for an item with quantitative indicators. To this end, the Amazon dataset is used to quantify an item’s attribute information through Sentence-BERT, and sentiment analysis of the review data is performed. The model proposed in this study simultaneously utilizes the attribute information and review data of an item, showing that it provides higher performance than using review text alone. Finally, we verified that our approach significantly outperforms traditional baseline models in rating prediction and effectively improves top-K recommendation metrics. In addition, ablation studies found that integrating item attributes and review sentiment performs better than using them individually. This means that the complementary synthesis of objective item meanings and subjective user sentiment can model user preferences more accurately, enabling personalized recommendations. Full article
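The fusion of item-description embeddings with review sentiment can be sketched without the heavy models. The hashed bag-of-words stands in for an SBERT sentence embedding and the word-list scorer stands in for a trained sentiment model; both are illustrative assumptions, not the paper's components:

```python
def embed(text, dim=8):
    """Stand-in for a Sentence-BERT embedding: L2-normalized hashed
    bag-of-words, so the sketch stays dependency-free."""
    v = [0.0] * dim
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    norm = sum(x * x for x in v) ** 0.5 or 1.0
    return [x / norm for x in v]

POS, NEG = {"great", "love", "excellent"}, {"bad", "poor", "broken"}

def sentiment(review):
    """Toy lexicon score clipped to [-1, 1], standing in for a model."""
    words = review.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def item_features(description, review):
    """Concatenate the attribute embedding with the review sentiment,
    mirroring the paper's idea of fusing both signals per item."""
    return embed(description) + [sentiment(review)]

feats = item_features("wireless noise cancelling headphones", "great sound, love them")
```

A downstream rating predictor trained on such fused vectors can score a brand-new item from its description alone, with the sentiment slot filled in once reviews arrive.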

17 pages, 540 KB  
Article
Research on Imitation–Reinforcement Hybrid Machine Learning Algorithms: Application in Path Planning
by Linsong Zhang and Xiaohui Yan
Mathematics 2026, 14(1), 161; https://doi.org/10.3390/math14010161 - 31 Dec 2025
Viewed by 442
Abstract
Path planning in complex, dynamic environments presents a significant challenge. Deep Reinforcement Learning (DRL) offers an end-to-end solution but suffers from critical sample inefficiency and a “cold-start” problem. Imitation Learning (IL) accelerates training but is constrained by a performance ceiling and poor generalization. To address these limitations, we propose a novel Imitation–Reinforcement Hybrid Machine Learning Algorithm (Hybrid IL-RL). This framework balances exploration and performance via a two-stage process: First, an offline pre-training phase uses Behavioral Cloning (BC) with “non-expert” A* data from static environments for a “warm start”. Second, an online fine-tuning phase uses a DRL algorithm (SAC) to adapt this policy in complex, dynamic environments, allowing the agent to surpass the teacher’s limitations. Simulation experiments validate the approach. The framework demonstrates significantly faster convergence than DRL algorithms trained from scratch. Most critically, in the dynamic environment, our Hybrid IL-RL algorithm achieved the highest success rate (82.4%), while pure IL methods (BC, GAIL) failed due to poor generalization (e.g., 82.1% collision rate) and pure DRL methods struggled (approx. 51–56% success rate). Our results confirm the hybrid framework effectively solves the cold-start problem while using DRL to break the IL performance ceiling. Full article
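The two-stage warm-start pattern can be shown at toy scale. Tabular Q-learning replaces SAC, a hand-written line-world replaces the simulator, and the demonstrations (deliberately wrong at states 2 and 3) replace the non-expert A* data; all of this is a sketch of the pattern, not the authors' algorithm:

```python
import random

random.seed(0)

ACTIONS = ["R", "D", "L", "U"]
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

# Stage 1 (imitation): behavioural cloning "warm start" from (state, action)
# demonstrations by a non-expert teacher -- wrong at states 2 and 3.
for s, a in [(0, "R"), (1, "R"), (2, "D"), (3, "D")]:
    q[(s, a)] = 1.0

def step(s, a):
    """Toy line-world: only 'R' advances toward the goal state 4."""
    ns = s + 1 if a == "R" else s
    return ns, (1.0 if ns == 4 else -0.01)

# Stage 2 (reinforcement): epsilon-greedy Q-learning fine-tuning lets the
# agent keep the teacher's good habits and overwrite the bad ones.
for _ in range(200):
    s = 0
    while s != 4:
        if random.random() < 0.2:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])   # exploit
        ns, r = step(s, a)
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(ns, x)] for x in ACTIONS) - q[(s, a)])
        s = ns

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)]
```

The cloned values make early episodes efficient where the teacher was right, while the reward signal corrects states 2 and 3, which is the "break the IL performance ceiling" effect the abstract reports.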

22 pages, 1236 KB  
Article
An Industrial Framework for Cold-Start Recommendation in Few-Shot and Zero-Shot Scenarios
by Xulei Cao, Wenyu Zhang, Feiyang Jiang and Xinming Zhang
Information 2025, 16(12), 1105; https://doi.org/10.3390/info16121105 - 15 Dec 2025
Viewed by 805
Abstract
With the rise of online advertising, e-commerce industries, and new media platforms, recommendation systems have become an essential product form that connects users with a vast number of candidates. A major challenge in recommendation systems is the cold-start problem, where the absence of historical interaction data for new users and items leads to poor recommendation performance. We first analyze the causes of the cold-start problem, highlighting the limitations of existing embedding models when faced with a lack of interaction data. To address this, we classify the features of models into three categories, leveraging the Trans Block mapping to transfer features into the semantic space of missing features. Then, we propose a model-agnostic industrial framework (MAIF) with the Auto-Selection serving mechanism to address the cold-start recommendation problem in few-shot and zero-shot scenarios without requiring training from scratch. This framework can be applied to various online models without altering the prediction for warm entities, effectively avoiding the “seesaw phenomenon” between cold and warm entities. It improves prediction accuracy and calibration performance in three cold-start scenarios of recommendation systems. Finally, both the offline experiments on real-world industrial datasets and the online advertising system on the Dazhong Dianping app validate the effectiveness of our approach, showing significant improvements in recommendation performance for cold-start scenarios. Full article

32 pages, 1895 KB  
Article
A Hybrid AI-Stochastic Framework for Predicting Dynamic Labor Productivity in Sustainable Repetitive Construction Activities
by Naif Alsanabani, Khalid Al-Gahtani, Ayman Altuwaim and Abdulrahman Bin Mahmoud
Sustainability 2025, 17(24), 11097; https://doi.org/10.3390/su172411097 - 11 Dec 2025
Viewed by 404
Abstract
Accurate real-time prediction of labor productivity is crucial for the successful management of construction projects. However, it remains a significant challenge due to the dynamic and uncertain nature of construction environments. Existing models, while valuable for planning and post-analysis, often rely on historical data and static assumptions, rendering them inadequate for providing actionable, real-time insights during construction. This study addresses this gap by suggesting a novel hybrid AI-stochastic framework that integrates a Long Short-Term Memory (LSTM) network with Markov Chain modeling for dynamic productivity forecasting in repetitive construction activities. The LSTM component captures complex, long-term temporal dependencies in productivity data, while the Markov Chain models probabilistic state transitions (Low, Medium, High productivity) to account for inherent volatility and uncertainty. A key innovation is the use of a Bayesian-adjusted Transition Probability Matrix (TPM) to mitigate the “cold start” problem in new projects with limited initial data. The framework was rigorously validated across four distinct case studies, demonstrating robust performance with Mean Absolute Percentage Error (MAPE) values predominantly in the “Good” range (10–20%) for both the training and test datasets. A comprehensive sensitivity analysis further revealed the model’s stability under data perturbations, though performance varied with project characteristics. By enabling more efficient resource utilization and reducing project delays, the proposed framework contributes directly to sustainable construction practices. The model’s ability to provide accurate real-time predictions helps minimize material waste, reduce unnecessary labor costs, optimize equipment usage, and decrease the overall environmental impact of construction projects. Full article
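The Bayesian-adjusted Transition Probability Matrix idea admits a compact sketch: treat each row as a Dirichlet posterior whose prior comes from historical projects and whose counts come from the sparse new project. The prior values, counts, and pseudo-count strength below are illustrative, not the authors' estimator:

```python
def bayes_tpm(prior, counts, strength=10.0):
    """Each row of the returned TPM is the posterior mean of a
    Dirichlet(strength * prior_row) prior updated with observed
    transition counts -- illustrative smoothing for the cold start."""
    tpm = []
    for p_row, c_row in zip(prior, counts):
        alpha = [strength * p + c for p, c in zip(p_row, c_row)]
        total = sum(alpha)
        tpm.append([a / total for a in alpha])
    return tpm

# Productivity states: Low, Medium, High. Prior from past projects.
prior = [[0.6, 0.3, 0.1],
         [0.2, 0.6, 0.2],
         [0.1, 0.3, 0.6]]
# Sparse early-project observations (the cold-start situation).
counts = [[1, 0, 0],
          [0, 2, 1],
          [0, 0, 0]]
tpm = bayes_tpm(prior, counts)
```

Rows with no observations fall back to the historical prior, while observed rows shift toward the new project's data, so forecasts are usable from day one and sharpen as evidence accumulates.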

34 pages, 1102 KB  
Article
Personalized Course Recommendations Leveraging Machine and Transfer Learning Toward Improved Student Outcomes
by Shrooq Algarni and Frederick T. Sheldon
Mach. Learn. Knowl. Extr. 2025, 7(4), 138; https://doi.org/10.3390/make7040138 - 5 Nov 2025
Viewed by 1200
Abstract
University advising at matriculation must operate under strict information constraints, typically without any post-enrolment interaction history. We present a unified, leakage-free pipeline for predicting early dropout risk and generating cold-start programme recommendations from pre-enrolment signals alone, with an optional early-warning variant incorporating first-term academic aggregates. The approach instantiates lightweight multimodal architectures: tabular RNNs, DistilBERT encoders for compact profile sentences, and a cross-attention fusion module evaluated end-to-end on a public benchmark (UCI id 697; n = 3630 students across 17 programmes). For dropout, fusing text with numerics yields the strongest thresholded performance (Hybrid RNN–DistilBERT: F1-score ≈ 0.9161, MCC ≈ 0.7750), and simple ensembling modestly improves threshold-free discrimination (Area Under Receiver Operating Characteristic Curve (AUROC) up to ≈0.9488). A text-only branch markedly underperforms, indicating that numeric demographics and early curricular aggregates carry the dominant signal at this horizon. For programme recommendation, pre-enrolment demographics alone support actionable rankings (Demographic Multi-Layer Perceptron (MLP): Normalized Discounted Cumulative Gain @ 10 (NDCG@10) ≈ 0.5793, Top-10 ≈ 0.9380, exceeding a popularity prior by 25.27 percentage points in NDCG@10); adding text offers marginal gains in hit rate but not in NDCG on this cohort. Methodologically, we enforce leakage guards, deterministic preprocessing, stratified splits, and comprehensive metrics, enabling reproducibility on non-proprietary data. Practically, the pipeline supports orientation-time triage (high-recall early-warning) and shortlist generation for programme selection. The results position matriculation-time advising as a joint prediction–recommendation problem solvable with carefully engineered pre-enrolment views and lightweight multimodal models, without reliance on historical interactions.
Full article

29 pages, 23797 KB  
Article
Tone Mapping of HDR Images via Meta-Guided Bayesian Optimization and Virtual Diffraction Modeling
by Deju Huang, Xifeng Zheng, Jingxu Li, Ran Zhan, Jiachang Dong, Yuanyi Wen, Xinyue Mao, Yufeng Chen and Yu Chen
Sensors 2025, 25(21), 6577; https://doi.org/10.3390/s25216577 - 25 Oct 2025
Cited by 1 | Viewed by 1119
Abstract
This paper proposes a novel image tone-mapping framework that incorporates meta-learning, a psychophysical model, Bayesian optimization, and light-field virtual diffraction. First, we formalize the virtual diffraction process as a mathematical operator defined in the frequency domain to reconstruct high-dynamic-range (HDR) images through phase modulation, enabling the precise control of image details and contrast. In parallel, we apply the Stevens power law to simulate the nonlinear luminance perception of the human visual system, thereby adjusting the overall brightness distribution of the HDR image and improving the visual experience. Unlike existing methods that primarily emphasize structural fidelity, the proposed method strikes a balance between perceptual fidelity and visual naturalness. Second, an adaptive parameter-tuning system based on Bayesian optimization is developed to maximize the Tone Mapping Quality Index (TMQI), quantifying uncertainty with probabilistic models to approximate the global optimum in fewer evaluations. Furthermore, we propose a task-distribution-oriented meta-learning framework: a meta-feature space based on image statistics is constructed, and task clustering is combined with a gated meta-learner to rapidly predict initial parameters. This approach significantly enhances the robustness of the algorithm in generalizing to diverse HDR content and effectively mitigates the cold-start problem in the early stage of Bayesian optimization, thereby accelerating the convergence of the overall optimization process. Experimental results demonstrate that the proposed method substantially outperforms state-of-the-art tone-mapping algorithms across multiple benchmark datasets, with an average improvement of up to 27% in naturalness. In addition, the meta-learning-guided Bayesian optimization achieves two- to five-fold faster convergence.
In the trade-off between computational time and performance, the proposed method consistently dominates the Pareto frontier, achieving high-quality results and efficient convergence with a low computational cost. Full article
(This article belongs to the Section Sensing and Imaging)
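The Stevens power law mentioned in the abstract maps physical luminance to perceived brightness via a power function. A minimal sketch, assuming luminance normalised to [0, 1] and the classic brightness exponent of about 0.33; the paper's actual parameterisation may differ:

```python
def stevens_tone_curve(luminance, exponent=0.33, scale=1.0):
    """Map normalised linear luminance through the Stevens power law:
    perceived brightness ~ scale * luminance ** exponent.
    Exponents below 1 lift dark values and compress highlights,
    which is the desired behaviour when tone-mapping HDR content."""
    return [scale * max(l, 0.0) ** exponent for l in luminance]

# Compressing a high-dynamic-range ramp: shadows are lifted toward
# mid-grey while the white point stays fixed at 1.0.
ramp = [0.001, 0.01, 0.1, 1.0]
print(stevens_tone_curve(ramp))
```

The curve is monotone, so relative ordering of pixel luminances (and hence structural content) is preserved while the dynamic range is redistributed.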

33 pages, 2812 KB  
Article
A Symmetry-Aware Predictive Framework for Olympic Cold-Start Problems and Rare Events Based on Multi-Granularity Transfer Learning and Extreme Value Analysis
by Yanan Wang, Yi Fei and Qiuyan Zhang
Symmetry 2025, 17(11), 1791; https://doi.org/10.3390/sym17111791 - 23 Oct 2025
Viewed by 816
Abstract
This paper addresses the cold-start problem and rare event prediction challenges in Olympic medal forecasting by proposing a predictive framework that integrates multi-granularity transfer learning with extreme value theory. The framework comprises two main components: a Multi-Granularity Transfer Learning Core (MG-TLC) and a Rare Event Analysis Module (RE-AM), which respectively address multi-level prediction for data-scarce countries and first-medal prediction. The MG-TLC incorporates two key components: Dynamic Feature Space Reconstruction (DFSR) and the Hierarchical Adaptive Transfer Strategy (HATS). The RE-AM combines a Bayesian hierarchical extreme value model (BHEV) with piecewise survival analysis (PSA). Experiments based on comprehensive, licensed Olympic data from 1896–2024, where the framework was trained on data up to 2016, validated on the 2020 Games, and tested by forecasting the 2024 Games, demonstrate that the proposed framework significantly outperforms existing methods, reducing MAE by 25.7% for data-scarce countries and achieving an AUC of 0.833 for first-medal prediction, 14.3% higher than baseline methods. This research establishes a foundation for predicting the 2028 Los Angeles Olympics and provides new approaches for cold-start and rare event prediction, with potential applicability to similar challenges in other data-scarce domains such as economics or public health. From a symmetry viewpoint, our framework is designed to preserve task-relevant invariances—permutation invariance in set-based country aggregation and scale robustness to macro-covariate units—via distributional alignment between data-rich and data-scarce domains and Olympic-cycle indexing. We treat departures from these symmetries (e.g., host advantage or event-program changes) as structured asymmetries and capture them with a rare event module that combines extreme value and survival modeling.
(This article belongs to the Special Issue Applications Based on Symmetry in Machine Learning and Data Mining)
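Extreme value modelling of the kind the BHEV component builds on can be illustrated with the Gumbel (Type-I extreme value) distribution. A minimal sketch; the location and scale parameters below are hypothetical, not fitted values from the paper:

```python
import math

def gumbel_exceedance(x, mu, beta):
    """P(X > x) under a Gumbel (Type-I extreme value) distribution,
    a standard building block of extreme value analysis.
    CDF: F(x) = exp(-exp(-(x - mu) / beta)), beta > 0."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

# Hypothetical location/scale describing a country's best historical
# performance score; probability its next-cycle maximum clears a
# medal-winning threshold of 9.0.
print(gumbel_exceedance(9.0, mu=7.5, beta=0.8))
```

A first-medal event is rare precisely because this exceedance probability is small each cycle, which is why the paper pairs extreme value modelling with survival analysis over successive Games.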

27 pages, 761 KB  
Article
A Novel Framework Leveraging Social Media Insights to Address the Cold-Start Problem in Recommendation Systems
by Enes Celik and Sevinc Ilhan Omurca
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 234; https://doi.org/10.3390/jtaer20030234 - 2 Sep 2025
Cited by 1 | Viewed by 2522
Abstract
In today’s world, with rapidly developing technology, it has become possible to perform many transactions over the internet. Consequently, providing better service to online customers in every field has become a crucial task. These advancements have driven companies and sellers to recommend tailored products to their customers. Recommendation systems have emerged as a field of study to ensure that relevant and suitable products can be presented to users. One of the major challenges in recommendation systems is the cold-start problem, which arises when there is insufficient information about a newly introduced user or product. To address this issue, we propose a novel framework that leverages implicit behavioral insights from users’ X social media activity to construct personalized profiles without requiring explicit user input. In the proposed model, users’ behavioral profiles are first derived from their social media data. Then, recommendation lists are generated to address the cold-start problem by employing boosting algorithms. The framework employs six boosting algorithms to classify user preferences for the top 20 most-rated films on Letterboxd. In this way, a solution is offered that requires no external data beyond users’ social media activity. Experiments on this dataset demonstrate that CatBoost outperforms the other methods, achieving an F1-score of 0.87 and an MAE of 0.21. Based on the experimental results, the proposed system outperforms existing methods developed to solve the cold-start problem. Full article
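The reported F1-score and MAE follow from their standard definitions. A minimal pure-Python sketch on toy labels; the data below is illustrative, not the Letterboxd cohort:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mae(y_true, y_pred):
    """Mean absolute error between predicted and true ratings."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy labels: 2 true positives, 1 false positive, 1 false negative.
print(f1_score([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))  # 2/3
print(mae([4.0, 3.5, 5.0], [3.5, 3.5, 4.0]))       # 0.5
```

F1 is the natural choice here because "would watch / would not watch" labels derived from social media activity are typically imbalanced, and accuracy alone would reward the majority class.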
