Article

Stock Price Prediction Using FinBERT-Enhanced Sentiment with SHAP Explainability and Differential Privacy

by Linyan Ruan and Haiwei Jiang *
School of International Trade and Economics, Central University of Finance and Economics, Beijing 102206, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2747; https://doi.org/10.3390/math13172747
Submission received: 15 July 2025 / Revised: 4 August 2025 / Accepted: 11 August 2025 / Published: 26 August 2025

Abstract

Stock price forecasting remains a central challenge in financial modeling due to the non-stationarity, noise, and high dimensionality of market dynamics, as well as the growing importance of unstructured textual information. In this work, we propose a multimodal prediction framework that combines FinBERT-based financial sentiment extraction with technical and statistical indicators to forecast short-term stock price movement. Contextual sentiment signals are derived from financial news headlines using FinBERT, a domain-specific transformer model fine-tuned on annotated financial text. These signals are aggregated and fused with price- and volatility-based features, forming the input to a gradient-boosted decision tree classifier (XGBoost). To ensure interpretability, we employ SHAP (SHapley Additive exPlanations), which decomposes each prediction into additive feature attributions while satisfying game-theoretic fairness axioms. In addition, we integrate differential privacy into the training pipeline to ensure robustness against membership inference attacks and protect proprietary or client-sensitive data. Empirical evaluations across multiple S&P 500 equities from 2018–2023 demonstrate that our FinBERT-enhanced model consistently outperforms both technical-only and lexicon-based sentiment baselines in terms of AUC, F1-score, and simulated trading profitability. SHAP analysis confirms that FinBERT-derived features rank among the most influential predictors. Our findings highlight the complementary value of domain-specific NLP and privacy-preserving machine learning in financial forecasting, offering a principled, interpretable, and deployable solution for real-world quantitative finance applications.

1. Introduction

Stock price prediction remains a fundamental and extensively studied challenge in financial machine learning. The task is complicated by the non-stationarity, noise, and nonlinear dependencies that characterize financial time series. Traditional econometric models, such as ARIMA, GARCH, and their multivariate extensions, offer interpretable statistical structures, but are often unable to capture complex cross-feature interactions and regime-dependent behaviors. With the proliferation of computational resources and large-scale data availability, machine learning models—particularly ensemble methods and deep neural networks—have emerged as competitive alternatives. Among these, gradient boosting decision trees (GBDTs) such as XGBoost have shown strong empirical performance in financial forecasting tasks due to their robustness, generalization capacity, and ability to model complex nonlinearities [1,2]. However, while these models improve predictive accuracy, they often lack transparency and typically rely on numerical signals alone, omitting valuable unstructured information such as financial news, analyst commentary, and earnings announcements.
Financial text sources encode rich semantic information that reflects investor expectations, market sentiment, and forward-looking beliefs. Extracting this information, however, is a non-trivial task due to lexical ambiguity, domain-specific terminology, and subtle sentiment cues present in financial narratives. Generic sentiment analysis tools (e.g., VADER, TextBlob) are ill-suited for this purpose, as they are trained on social media or general-purpose corpora. Words like “depreciation,” “liability,” or “exposure” may carry negative connotations in everyday language, but represent neutral or even positive signals in a financial context. Consequently, domain-adapted language models have become a focal point in financial NLP research. FinBERT [3], a BERT-based transformer model pre-trained on financial texts, is designed to address these challenges by capturing context-specific sentiment and reducing misclassification in professional finance documents.
Despite improvements in sentiment extraction, integrating such features into predictive models remains challenging, particularly in terms of robustness and interpretability. Deep neural networks and other black-box models often provide limited insight into their decision-making process—an unacceptable limitation in financial domains governed by compliance, fiduciary responsibility, and regulatory transparency. To address this, we incorporate SHAP (SHapley Additive exPlanations) [4], a game-theoretic interpretability framework that assigns each feature an additive importance score. SHAP allows us to decompose predictions into constituent drivers (e.g., sentiment, volatility) and trace the logic behind each forecast. This is essential for both model validation and informed, auditable decision-making.
We propose a multimodal framework that fuses FinBERT-enhanced sentiment features with classical technical indicators and price–volume signals for next-day directional stock movement prediction. Sentiment signals are extracted from time-aligned financial news headlines and transformed into structured features (mean, max, dispersion), which are then integrated into an XGBoost classifier. Our choice of XGBoost is motivated not only by its strong predictive performance in financial contexts [5,6], but also by its native compatibility with TreeSHAP, which enables transparent, fine-grained explanations of model behavior—a critical requirement in real-world financial applications.
Our model is trained and evaluated across multiple equities (e.g., AAPL, MSFT, TSLA, JPM) and diverse market regimes (e.g., pre-COVID, COVID crash, post-COVID recovery), ensuring both statistical and economic robustness.
The key contributions of this work are as follows:
1. We develop a modular, interpretable framework for short-term stock prediction that integrates structured technical indicators with unstructured financial sentiment derived using FinBERT, a domain-specific transformer model.
2. We conduct extensive empirical evaluation across multiple assets and temporal regimes, demonstrating that FinBERT sentiment features substantially improve classification accuracy, AUC, and simulated trading performance over both traditional and lexicon-based baselines.
3. We employ SHAP for feature attribution, revealing that FinBERT-derived features consistently rank among the most influential predictors, and that their importance varies in intuitive ways across volatility regimes and event-driven periods such as earnings announcements.
4. We assess the generalization capacity of the model via cross-sectional and temporal experiments, showing that FinBERT-enhanced signals are resilient to market regime shifts and lead to more stable predictive behavior across diverse conditions.

2. Related Work

2.1. Stock Price Prediction with Machine Learning

Stock market forecasting has historically relied on econometric models such as autoregressive integrated moving average and generalized autoregressive conditional heteroskedasticity. While these models provide interpretable structures, they are inherently limited in capturing nonlinear dependencies and interactions among heterogeneous data modalities. In contrast, machine learning approaches, particularly ensemble-based models and deep learning architectures, have shown superior performance in capturing complex, high-dimensional patterns from financial data [7,8].
Gradient Boosting Machines (GBMs), especially XGBoost [9], have gained widespread adoption in financial modeling due to their robustness to multicollinearity, ability to handle missing data, and superior generalization performance. Recent work has demonstrated the utility of GBMs in forecasting short-term equity returns by integrating technical indicators, macroeconomic signals, and order book features [10]. However, the incorporation of textual data—particularly news sentiment—remains an open challenge due to the noisy and unstructured nature of financial language.

2.2. Financial Sentiment Analysis

Sentiment analysis in the financial domain is substantially different from general-purpose natural language processing (NLP) due to domain-specific jargon, context-dependent polarity, and the prevalence of subtle linguistic cues (e.g., hedging, speculation) [11,12,13]. Early efforts used dictionary-based approaches such as the Loughran–McDonald sentiment lexicon [14], which identifies domain-specific positive and negative words. However, such methods suffer from limited contextual awareness and low precision.
The advent of pre-trained language models, particularly those based on the Transformer architecture [15], has revolutionized NLP applications in finance. FinBERT, a domain-adapted version of BERT [16] trained on the Financial PhraseBank, has demonstrated state-of-the-art performance in sentence-level sentiment classification for financial texts. FinBERT captures both semantic context and syntactic structure, making it well-suited for analyzing earnings reports, analyst statements, and news headlines. Recent empirical studies have confirmed that FinBERT-based sentiment signals can enhance the predictive accuracy of trading strategies [17,18].

2.3. Explainable AI in Finance

Despite the effectiveness of complex ML models, their black-box nature has raised concerns regarding transparency, accountability, and regulatory compliance in financial contexts. Explainable AI (XAI) seeks to make model predictions interpretable to human stakeholders without sacrificing predictive accuracy. Among various XAI approaches, SHAP (SHapley Additive exPlanations) [19] has emerged as a principled method grounded in cooperative game theory, offering consistent and locally accurate feature attributions.
In the finance literature, SHAP has been used to dissect credit scoring models, assess risk factor contributions in asset pricing, and interpret algorithmic trading signals. However, limited work has explored the integration of SHAP with models that incorporate NLP-derived sentiment features, particularly those obtained via FinBERT. Our work addresses this gap by jointly leveraging SHAP and FinBERT to produce interpretable stock price prediction models that combine structured and unstructured data.

2.4. Multimodal Approaches to Financial Forecasting

Recent advances have explored the fusion of multimodal data sources—technical indicators, textual news, earnings call transcripts, and social media signals—for enhanced financial forecasting [20,21]. Multimodal learning frameworks such as those proposed in [22,23] integrate both numeric and linguistic modalities using hierarchical attention networks or cross-modal transformers. While such models are expressive, they often suffer from reduced interpretability and high data requirements [24].
Our approach differs in that we maintain model interpretability by utilizing structured inputs (technical indicators) and FinBERT-derived sentiment scores—eschewing raw text embeddings—in conjunction with an interpretable ensemble model (XGBoost). This architecture strikes a balance between predictive performance and explainability, making it practical for real-world deployment.
Recent research into financial sentiment modeling has evolved from traditional lexicon-based methods to advanced transformer-based architectures. Early approaches leveraged domain-specific dictionaries such as the Loughran–McDonald financial sentiment lexicon, and they remain widely used due to their interpretability and tailored financial vocabulary. However, these methods often struggle with contextual ambiguity and syntactic nuances in financial texts [25]. Transformer-based models like FinBERT and FinancialBERT have addressed these limitations as they are pre-trained on large-scale financial corpora, enabling them to capture deeper contextual dependencies and domain-specific semantics. To position our work within this trajectory, we also draw on recent systematic literature reviews (SLRs) that synthesize developments in financial NLP. For instance, Du et al. [26] review trends in sentiment-driven forecasting models, while Mishev et al. [27] provide a comprehensive taxonomy of deep learning techniques applied to financial text analytics. These reviews highlight the growing emphasis on explainability and multimodal integration—key aspects addressed in our proposed framework.

3. Preliminaries

This section outlines the fundamental concepts underlying our proposed framework, including (i) short-term stock price movement prediction as a supervised learning task, (ii) domain-specific sentiment extraction via FinBERT, and (iii) model interpretation using SHAP (SHapley Additive exPlanations).

3.1. Stock Price Movement Prediction

Let $P_{\mathrm{close}}(t)$ denote the adjusted closing price of a given stock on trading day $t$. The predictive task is to forecast the direction of price movement on day $t+1$, based on features available up to and including day $t$.
Definition 1
(Directional Label). We define the binary target variable $y(t) \in \{0, 1\}$ for each day $t$ as follows:
$$y(t) = \begin{cases} 1, & \text{if } P_{\mathrm{close}}(t+1) > P_{\mathrm{close}}(t) \\ 0, & \text{otherwise} \end{cases}$$
This formulation corresponds to a next-day directional forecasting objective, which is commonly used in high-frequency trading and signal-based portfolio strategies. Let $x(t) \in \mathbb{R}^d$ denote the feature vector derived from both technical and textual signals at time $t$. The goal is to learn a function $f_\theta : \mathbb{R}^d \to [0, 1]$ parameterized by $\theta$, where $f_\theta(x(t))$ approximates $P(y(t) = 1 \mid x(t))$. Models used in this setting include tree ensembles (e.g., XGBoost), logistic regression, and neural networks, with XGBoost chosen in our framework for its high performance and compatibility with SHAP.
We note that modeling next-day directional movement omits return magnitude, which is relevant for profitability estimation and portfolio construction. This formulation was deliberately chosen to isolate the marginal predictive value of sentiment signals while maintaining interpretability and consistent evaluation across different markets.
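For illustration, the label of Definition 1 reduces to a comparison of shifted price series. The following minimal pandas sketch (column and function names are our own, not from the paper) shows the construction:

```python
# Sketch: next-day directional labels y(t) from adjusted closing prices.
import pandas as pd

def directional_labels(close: pd.Series) -> pd.Series:
    """y(t) = 1 if P_close(t+1) > P_close(t), else 0."""
    # shift(-1) aligns tomorrow's close with today's feature row
    y = (close.shift(-1) > close).astype(int)
    return y.iloc[:-1]  # the final day has no t+1 observation

prices = pd.Series([100.0, 101.5, 101.0, 102.3], name="Close")
print(directional_labels(prices).tolist())  # [1, 0, 1]
```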

3.2. Domain-Specific Sentiment Analysis with FinBERT

In the context of financial forecasting, textual sentiment serves as a proxy for market expectations and investor behavior. However, general-purpose sentiment classifiers often misinterpret domain-specific vocabulary. FinBERT addresses this issue by fine-tuning the BERT (Bidirectional Encoder Representations from Transformers) architecture on the Financial PhraseBank corpus, which consists of expert-annotated financial sentences.
Definition 2
(FinBERT Sentiment Output). Given a tokenized text input $h \in \mathcal{T}$, where $\mathcal{T}$ is the space of token sequences, FinBERT outputs a probability vector over three sentiment classes:
$$\mathrm{FinBERT}(h) = \big(P_{\mathrm{pos}}(h),\; P_{\mathrm{neu}}(h),\; P_{\mathrm{neg}}(h)\big) \in \Delta^2$$
where $\Delta^2$ denotes the 2-simplex in $\mathbb{R}^3$.
Definition 3
(Scalarized Sentiment Score). We define a continuous sentiment score for a headline h as
$$s(h) = P_{\mathrm{pos}}(h) - P_{\mathrm{neg}}(h)$$
For each trading day $t$, we aggregate sentiment scores across multiple headlines $\{h_i(t)\}_{i=1}^{n_t}$ using statistical functions such as the mean, maximum, and standard deviation:
$$\mu_t = \frac{1}{n_t} \sum_{i=1}^{n_t} s(h_i(t)), \qquad \sigma_t = \sqrt{\frac{1}{n_t} \sum_{i=1}^{n_t} \big(s(h_i(t)) - \mu_t\big)^2}$$
These aggregated values form the sentiment component of the feature vector $x(t)$ used for prediction.
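A minimal sketch of headline scoring and daily aggregation follows. The "ProsusAI/finbert" Hugging Face checkpoint is an assumption (the paper does not name a specific FinBERT release); the label order is read from the checkpoint config rather than hard-coded:

```python
# Sketch: scalarized sentiment s(h) = P_pos - P_neg and daily aggregates.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ProsusAI/finbert")       # assumed checkpoint
model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert")
model.eval()

def scalarized_score(headline: str) -> float:
    inputs = tok(headline, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
    p = {model.config.id2label[i].lower(): probs[i].item()
         for i in range(probs.shape[0])}
    return p["positive"] - p["negative"]

def daily_features(headlines: list[str]) -> tuple[float, float, float]:
    """Mean, max, and dispersion of headline scores for one trading day."""
    s = [scalarized_score(h) for h in headlines]
    mu = sum(s) / len(s)
    sd = (sum((x - mu) ** 2 for x in s) / len(s)) ** 0.5
    return mu, max(s), sd
```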

3.3. Explainable Machine Learning with SHAP

Modern ensemble models such as XGBoost often operate as black boxes, limiting their usefulness in domains like finance where decision transparency is critical. SHAP [4] provides a unified, game-theoretic framework for interpreting model outputs via additive feature attributions.
Definition 4
(SHAP Decomposition). Let $f : \mathbb{R}^d \to \mathbb{R}$ be a trained model and $x \in \mathbb{R}^d$ be an input instance. The SHAP framework represents the model output as
$$f(x) = \phi_0 + \sum_{j=1}^{d} \phi_j(x)$$
where $\phi_0 = \mathbb{E}_x[f(x)]$ is the expected output over the data distribution, and $\phi_j(x)$ is the Shapley value corresponding to feature $j$.
Remark 1.
SHAP values satisfy the following desirable axioms: (i) efficiency, meaning the sum of all feature contributions equals the output difference from the baseline; (ii) symmetry, where equally contributing features receive equal attributions; (iii) nullity, assigning zero importance to non-influential features; and (iv) linearity, ensuring additive consistency across models.
In practice, we compute SHAP values using the TreeSHAP algorithm, which allows efficient exact computation for tree ensemble models such as XGBoost. SHAP enables both global interpretability (via average absolute contributions across the dataset) and local interpretability (per-instance feature attribution), supporting robust and transparent deployment of machine learning models in finance.
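A minimal sketch of this computation with the shap library is given below; `model` (a fitted XGBoost classifier) and the feature matrix `X` are assumed to exist:

```python
# Sketch: exact TreeSHAP attributions for a fitted tree ensemble.
import shap

explainer = shap.TreeExplainer(model)       # exact SHAP for tree ensembles
shap_values = explainer.shap_values(X)      # shape (n_samples, n_features)

# Local check: phi_0 + sum_j phi_j recovers the model output for row t
# (in log-odds/margin space for a binary XGBoost classifier).
t = 0
print(explainer.expected_value + shap_values[t].sum())
```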

4. Methodology

4.1. Data Acquisition and Preprocessing

Let $S = \{s_i\}_{i=1}^{N}$ denote the universe of publicly traded equity securities considered in this study, where each $s_i \in S$ corresponds to a unique S&P 500 constituent. For each asset $s_i$, we define a multimodal time series dataset consisting of structured market data and unstructured textual news data over a temporal horizon $t = 1, \ldots, T$, where $t$ indexes trading days aligned with the U.S. equity market calendar.
Let $H_{s_i}(t) = \{h_{s_i,j}(t)\}_{j=1}^{n_t}$ denote the set of textual headlines associated with asset $s_i$ on day $t$, where each $h_{s_i,j}(t) \in \mathcal{T}$ is a headline represented as a raw text string or tokenized sequence. Headlines are sourced from reputable financial news providers via licensed aggregators or APIs. We define $T_{s_i}(t) = \{\tau_{s_i,j}(t)\}_{j=1}^{n_t}$ as the corresponding set of publication timestamps associated with each headline.
To ensure strict temporal consistency and eliminate forward-looking bias, we define an admissible headline set for time t as
$$\tilde{H}_{s_i}(t) := \big\{ h_{s_i,j}(t) \in H_{s_i}(t) : \tau_{s_i,j}(t) \le \tau_{\mathrm{close}}(t) \big\}$$
where $\tau_{\mathrm{close}}(t)$ denotes the market close time (typically 16:00 ET) on trading day $t$. Headlines published post-close (i.e., $\tau_{s_i,j}(t) > \tau_{\mathrm{close}}(t)$) are deferred to day $t+1$ and excluded from feature construction at time $t$ to prevent temporal leakage. This filtration ensures the measurability of feature vectors with respect to the information set $\mathcal{F}_{s_i}(t)$.
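A minimal pandas sketch of this leakage filter follows; the column names are our own, and for simplicity post-close headlines roll to the next calendar day, whereas a full implementation would roll to the next trading day:

```python
# Sketch: defer headlines time-stamped after the 16:00 ET close to day t+1.
import pandas as pd

def assign_trading_day(news: pd.DataFrame) -> pd.DataFrame:
    """news: columns ['headline', 'timestamp'] with tz-aware timestamps."""
    ts = news["timestamp"].dt.tz_convert("America/New_York")
    day = ts.dt.normalize()
    close = day + pd.Timedelta(hours=16)          # 16:00 ET market close
    out = news.copy()
    # headlines published post-close belong to the next day's feature set
    out["trading_day"] = day.where(ts <= close, day + pd.Timedelta(days=1))
    return out
```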

Time Alignment and Feature Construction

For each day $t$, we aggregate headline-level sentiment signals $\{s(h_{s_i,j}(t))\}$ derived from FinBERT (as defined in Section 3.2) into daily summary statistics:
$$\bar{s}_{s_i}(t) = \frac{1}{|\tilde{H}_{s_i}(t)|} \sum_{h \in \tilde{H}_{s_i}(t)} s(h), \qquad \sigma_{s_i}(t) = \sqrt{\frac{1}{|\tilde{H}_{s_i}(t)|} \sum_{h \in \tilde{H}_{s_i}(t)} \big(s(h) - \bar{s}_{s_i}(t)\big)^2}$$
These aggregated statistics (mean, standard deviation, and maximum sentiment) are then concatenated with the structured feature vector $z_{s_i}(t)$ to produce the full multimodal representation $x_{s_i}(t) \in \mathbb{R}^{d}$, where $d = d_z + m$ and $m$ is the number of derived sentiment features. Figure 1 shows the rolling time-aligned data flow used for evaluation.
In particular, we handle missing values in either structured or textual modalities (e.g., due to sparse news coverage or market holidays) using forward-fill interpolation:
$$x_{s_i,k}(t) = \begin{cases} x_{s_i,k}(t-1), & \text{if } x_{s_i,k}(t) = \mathrm{NaN} \\ x_{s_i,k}(t), & \text{otherwise} \end{cases}$$
for all feature dimensions $k \in \{1, \ldots, d\}$. This assumes weak temporal stationarity in the absence of new observations. Alternatively, entire rows with missing structured values may be masked during training to preserve data fidelity under stricter modeling assumptions. Finally, to ensure numerical stability and avoid scale bias during model training, we apply z-score normalization to all continuous features:
$$\hat{x}_{s_i,k}(t) = \frac{x_{s_i,k}(t) - \mu_k}{\sigma_k}$$
where $\mu_k$ and $\sigma_k$ are the empirical mean and standard deviation of feature $k$ computed on the training subset only. Normalization statistics are held constant across all test folds to prevent data leakage.
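The two preprocessing steps above amount to a few lines of pandas; a minimal sketch (function name our own) is:

```python
# Sketch: forward-fill gaps, then z-score with training-fold statistics only.
import pandas as pd

def preprocess(train: pd.DataFrame, test: pd.DataFrame):
    train, test = train.ffill(), test.ffill()
    mu, sigma = train.mean(), train.std(ddof=0)   # frozen on the training fold
    return (train - mu) / sigma, (test - mu) / sigma
```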
To address the potential bias introduced by treating all news sources equally, we recognize the need for incorporating source credibility and market influence into the headline weighting process. Not all financial news outlets exert equal impact on investor behavior; for example, headlines from Bloomberg or Reuters may carry greater informational weight than those from less-followed platforms. In future iterations of the model, we plan to assign differential weights to headlines based on source reputation, historical market response, or citation frequency in institutional reports. Integrating such source-aware weighting could improve the fidelity of sentiment signals and enhance predictive performance, particularly in periods of high information asymmetry.
In addition, to evaluate the robustness of our model under varying market conditions, we partition the study period into four major temporal regimes: Pre-COVID, COVID Crash, Post-COVID Recovery, and the Inflation & Rate Hikes era. These partitions, illustrated in Figure 2, were chosen to reflect structurally different market environments characterized by shifts in volatility, sentiment polarity, and macroeconomic drivers. We assess performance separately within each regime to ensure that the model maintains predictive consistency and interpretability across both crisis and expansionary periods.

4.2. Sentiment Quantification via FinBERT

To extract quantitative sentiment signals from financial news headlines, we apply a domain-specific transformer model, FinBERT, defined as the mapping $f_{\mathrm{FinBERT}} : \mathcal{T} \to \Delta^2$, where $\mathcal{T}$ denotes the space of tokenized text sequences, and $\Delta^2 = \{(p^+, p^0, p^-) \in [0,1]^3 \mid p^+ + p^0 + p^- = 1\}$ represents the probability simplex over the sentiment classes {positive, neutral, negative}. For each headline $h_{s_i,j}(t) \in H_{s_i}(t)$, FinBERT yields a posterior sentiment distribution:
$$f_{\mathrm{FinBERT}}\big(h_{s_i,j}(t)\big) = \big(p^+_{s_i,j}(t),\; p^0_{s_i,j}(t),\; p^-_{s_i,j}(t)\big).$$
We transform these probabilistic outputs into a scalar sentiment score via an affine mapping. Specifically, we define the scalarized sentiment score as
$$\tilde{s}_{s_i,j}(t) := \alpha\, p^+_{s_i,j}(t) - \beta\, p^-_{s_i,j}(t),$$
where $\alpha, \beta \in \mathbb{R}_{>0}$ are hyperparameters controlling asymmetry in optimism versus pessimism encoding. This generalization allows for the incorporation of prior domain beliefs—for example, setting $\alpha > \beta$ emphasizes the influence of positive sentiment, whereas $\alpha < \beta$ places more weight on negative signals.
To construct day-level features from multiple headlines, we introduce a set of time-dependent weights to reflect the differential importance of headlines published at different times during the trading day. Let $\tau_{s_i,j}(t) \in [0, 1)$ denote the normalized timestamp of the $j$-th headline, where $\tau = 0$ corresponds to midnight and $\tau = 1$ corresponds to the market close. We define the normalized exponential decay weight:
$$w_{s_i,j}(t) = \frac{\exp\big(-\lambda\, \tau_{s_i,j}(t)\big)}{\sum_{k=1}^{n_t} \exp\big(-\lambda\, \tau_{s_i,k}(t)\big)},$$
where $\lambda \ge 0$ controls the rate of decay; setting $\lambda = 0$ yields uniform weighting, while higher values emphasize earlier headlines.
Using the weighted sentiment scores, we compute a sequence of summary statistics for each day t. The first and second weighted raw moments are given by
$$M_1(t) = \sum_{j=1}^{n_t} w_{s_i,j}(t)\, \tilde{s}_{s_i,j}(t), \qquad M_2(t) = \sum_{j=1}^{n_t} w_{s_i,j}(t)\, \tilde{s}^2_{s_i,j}(t),$$
from which we derive the weighted mean and variance:
$$\bar{s}_{s_i}(t) = M_1(t), \qquad \sigma^2_{s_i}(t) = M_2(t) - M_1(t)^2.$$
We further capture distributional shape by including the empirical range,
$$\delta_{s_i}(t) = \max_j \tilde{s}_{s_i,j}(t) - \min_j \tilde{s}_{s_i,j}(t),$$
and excess kurtosis, computed via the fourth central moment:
$$\kappa_{s_i}(t) = \frac{M_4(t) - 4 M_3(t) M_1(t) + 6 M_2(t) M_1(t)^2 - 3 M_1(t)^4}{\big(\sigma^2_{s_i}(t)\big)^2} - 3,$$
where $M_r(t) = \sum_{j=1}^{n_t} w_{s_i,j}(t)\, \tilde{s}_{s_i,j}(t)^r$ for $r = 3, 4$. Finally, the day-level sentiment vector is defined as
$$\mathbf{s}_{s_i}(t) = \big(\bar{s}_{s_i}(t),\; \sigma_{s_i}(t),\; \delta_{s_i}(t),\; \kappa_{s_i}(t)\big) \in \mathbb{R}^4,$$
which is concatenated with the corresponding structured market features to yield the multimodal input representation $x_{s_i}(t) \in \mathbb{R}^{d}$, where $d = d_z + 4$. In our implementation, both $\alpha$ and $\beta$ were set to 1 by default, reflecting symmetrical treatment of positive and negative sentiment in the scalarized score. While this baseline captures general sentiment dynamics, we acknowledge that dynamically adjusting these weights based on market regimes (e.g., emphasizing negative sentiment during high-volatility periods) presents a promising avenue for future work. We also set $\lambda = 1.5$ after tuning over the validation set using cross-validated AUC.
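A minimal NumPy sketch of the day-level sentiment vector follows, using the tuned value $\lambda = 1.5$ and the symmetric score ($\alpha = \beta = 1$); the function name is our own:

```python
# Sketch: exponentially weighted moments of scalarized headline scores,
# yielding (weighted mean, std, range, excess kurtosis) for one day.
import numpy as np

def day_sentiment(scores: np.ndarray, tau: np.ndarray, lam: float = 1.5):
    """scores: scalarized scores; tau: normalized timestamps in [0, 1)."""
    w = np.exp(-lam * tau)
    w /= w.sum()                                       # normalized decay weights
    m = [np.sum(w * scores**r) for r in (1, 2, 3, 4)]  # weighted raw moments
    mean, var = m[0], m[1] - m[0]**2
    rng = scores.max() - scores.min()
    kurt = (m[3] - 4*m[2]*m[0] + 6*m[1]*m[0]**2 - 3*m[0]**4) / var**2 - 3
    return np.array([mean, np.sqrt(var), rng, kurt])
```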

4.3. Feature and Statistical Indicator Construction

Let $P^{\mathrm{close}}_{s_i}(t)$ denote the adjusted closing price of asset $s_i \in S$ at trading day $t$, and let $V_{s_i}(t)$ represent its traded volume. We define a suite of widely adopted time-domain indicators from quantitative finance to construct the structured component of the feature vector $z_{s_i}(t) \in \mathbb{R}^{d_z}$. All indicators are computed using causal information (i.e., using data up to and including time $t$) to avoid lookahead bias. We begin with the daily logarithmic return, defined as
$$r_{s_i}(t) := \log \frac{P^{\mathrm{close}}_{s_i}(t)}{P^{\mathrm{close}}_{s_i}(t-1)},$$
which captures relative price changes on a multiplicative scale and is stationary under geometric Brownian motion assumptions.
Next, we define the simple moving average (SMA) of length $k \in \mathbb{N}$ over the past $k$ days as
$$\mathrm{MA}_k(t) := \frac{1}{k} \sum_{\tau=0}^{k-1} P^{\mathrm{close}}_{s_i}(t - \tau),$$
which smooths short-term noise and captures local trend levels. The moving average is often used in conjunction with momentum indicators. We then estimate empirical return volatility over a window of size $k$ using the sample variance:
$$\hat{\sigma}^2_k(t) := \frac{1}{k} \sum_{\tau=0}^{k-1} \big(r_{s_i}(t - \tau) - \bar{r}_k(t)\big)^2, \quad \text{where} \quad \bar{r}_k(t) = \frac{1}{k} \sum_{\tau=0}^{k-1} r_{s_i}(t - \tau),$$
and $\hat{\sigma}_k(t) := \sqrt{\hat{\sigma}^2_k(t)}$ denotes the realized volatility.
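These three indicators translate directly into rolling pandas operations; a minimal causal sketch (names our own) is:

```python
# Sketch: log returns, k-day SMA, and rolling realized volatility,
# using only information up to and including day t (no lookahead).
import numpy as np
import pandas as pd

def technical_features(close: pd.Series, k: int = 20) -> pd.DataFrame:
    r = np.log(close / close.shift(1))              # daily log return
    return pd.DataFrame({
        "log_return": r,
        "sma_k": close.rolling(k).mean(),           # simple moving average
        "volatility_k": r.rolling(k).std(ddof=0),   # realized volatility
    })
```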
The final structured feature vector for stock $s_i$ on day $t$, denoted as $z_{s_i}(t)$, concatenates all computed indicators, including returns, price-level trends, volatility, momentum oscillators, and volume-based metrics (if applicable). The full multimodal input to the predictive model is given by
$$x_{s_i}(t) := \mathbf{s}_{s_i}(t) \oplus z_{s_i}(t) \in \mathbb{R}^{d},$$
where $\oplus$ denotes vector concatenation and $d = d_s + d_z$ is the total feature dimensionality, with $d_s = 4$ corresponding to sentiment features (as in Section 4.2).

4.4. Learning Formulation

Let $y_{s_i}(t) \in \{0, 1\}$ denote the binary directional label associated with stock $s_i$ on day $t$, defined as
$$y_{s_i}(t) := \mathbb{I}\big[ P^{\mathrm{close}}_{s_i}(t+1) > P^{\mathrm{close}}_{s_i}(t) \big],$$
where $\mathbb{I}[\cdot]$ is the indicator function. This formulation captures the short-term upward price movement signal and transforms the forecasting task into a supervised binary classification problem.
Let $\mathcal{D} = \big\{\big(x_{s_i}(t),\, y_{s_i}(t)\big)\big\}_{s_i, t}$ denote the full dataset consisting of input–output pairs, where $x_{s_i}(t) \in \mathbb{R}^d$ is the multimodal feature vector, constructed as described in the previous sections. The objective is to learn a parametric function $f_\theta : \mathbb{R}^d \to [0, 1]$, parameterized by $\theta$, that approximates the posterior class probability $P(y = 1 \mid x)$, given observed features.
We employ gradient boosted decision trees (GBDTs), implemented via the XGBoost framework, as the predictive model class. Each learned function f θ is an ensemble of regression trees:
$$f_\theta(x) = \sum_{m=1}^{M} f_m(x), \qquad f_m \in \mathcal{F},$$
where $\mathcal{F}$ denotes the space of regression trees, $M$ is the number of boosting rounds, and each $f_m$ corresponds to a tree structure with split nodes and leaf weights. The optimization objective over the dataset $\mathcal{D}$ is given by the following regularized empirical risk:
$$\mathcal{L}(\theta) = \sum_{(x, y) \in \mathcal{D}} \ell\big(y, f_\theta(x)\big) + \sum_{m=1}^{M} \Omega(f_m),$$
where $\ell$ is the binary cross-entropy loss, defined by
$$\ell(y, \hat{y}) = -y \log(\hat{y}) - (1 - y) \log(1 - \hat{y}), \qquad \hat{y} = f_\theta(x),$$
and $\Omega(f_m)$ is a regularization functional designed to penalize model complexity. In the XGBoost setting, this is typically defined as
$$\Omega(f_m) = \gamma T_m + \frac{1}{2} \lambda \sum_{j=1}^{T_m} w_j^2,$$
where $T_m$ is the number of leaves in tree $f_m$, $w_j \in \mathbb{R}$ is the weight of the $j$-th leaf, and $\gamma, \lambda > 0$ are hyperparameters controlling tree complexity and leaf shrinkage, respectively. This additive regularization prevents overfitting by discouraging overly deep trees and excessively large predictions.
Optimization proceeds via functional gradient descent in function space, where each new tree $f_m$ fits the first-order gradient of the loss function with respect to the current prediction. That is, letting $\hat{y}(t) = f_\theta^{(m-1)}(x_{s_i}(t))$, the next tree is trained to minimize the residual:
$$g(t) = \frac{\partial\, \ell\big(y(t), \hat{y}(t)\big)}{\partial\, \hat{y}(t)}.$$
We selected XGBoost as the base model due to its native compatibility with TreeSHAP, which enables precise, additive feature attributions that are critical for interpretability in regulated financial contexts. Although sequence models like LSTMs can capture temporal dependencies, we prioritized transparency and explainability over potential gains in sequential modeling.
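This learning formulation maps directly onto the XGBoost API. A minimal sketch follows, using the hyperparameters reported later in Section 6.1.3; `reg_lambda` and `gamma` are library defaults, since the paper does not report the constants of $\Omega$, and `X_train`, `y_train` are assumed to hold the multimodal features and labels:

```python
# Sketch: the GBDT classifier of Section 4.4 with the reported settings.
from xgboost import XGBClassifier

model = XGBClassifier(
    objective="binary:logistic",   # binary cross-entropy loss l(y, y_hat)
    max_depth=6,
    learning_rate=0.05,
    n_estimators=300,              # M boosting rounds
    subsample=0.8,
    colsample_bytree=0.8,
    reg_lambda=1.0,                # lambda: leaf-weight shrinkage in Omega
    gamma=0.0,                     # gamma: per-leaf complexity penalty in Omega
)
model.fit(X_train, y_train)
```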

4.5. Differential Privacy Integration

To ensure that our model maintains rigorous privacy guarantees in settings involving sensitive or proprietary financial data (e.g., client trades, confidential news feeds), we incorporate differential privacy (DP) mechanisms into the model training pipeline. Differential privacy offers formal protection against membership inference attacks and overfitting to individual data points, making it particularly relevant for financial machine learning systems deployed across clients, institutions, or regulatory boundaries.
A randomized mechanism $\mathcal{M} : \mathbb{D} \to \mathcal{R}$ is said to satisfy $(\varepsilon, \delta)$-differential privacy if for all adjacent datasets $D, D' \in \mathbb{D}$ differing in at most one record and for all measurable subsets $S \subseteq \mathcal{R}$,
$$P[\mathcal{M}(D) \in S] \le e^{\varepsilon} \cdot P[\mathcal{M}(D') \in S] + \delta,$$
where $\varepsilon > 0$ is the privacy budget and $\delta \ge 0$ is a negligible slack term.
We achieve differential privacy by applying DP-SGD (Differentially Private Stochastic Gradient Descent) to the training of our XGBoost classifier, following the gradient perturbation strategy established in the differential privacy literature [28]: per-sample gradients are clipped to a fixed $\ell_2$-norm and Gaussian noise is added to the aggregated gradient before each parameter update. Formally, for a minibatch $B \subseteq \mathcal{D}$, we define
$$\tilde{g}_i := \mathrm{clip}\big(\nabla_\theta\, \ell(x_i, y_i),\, C\big), \qquad \bar{g} := \frac{1}{|B|} \left( \sum_{i \in B} \tilde{g}_i + \mathcal{N}(0,\, \sigma^2 C^2 I) \right),$$
where $C$ is the clipping norm and $\sigma$ is the noise multiplier. This procedure ensures that the gradient-based optimization respects $(\varepsilon, \delta)$-differential privacy after accounting for total composition across training epochs via the moments accountant method.
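A minimal NumPy sketch of one such noisy gradient step, implementing the clip-average-perturb equation above (function name our own, not tied to any DP library), is:

```python
# Sketch: one DP-SGD gradient estimate g_bar from per-sample gradients.
import numpy as np

def dp_sgd_step(per_sample_grads: np.ndarray, C: float, sigma: float,
                rng: np.random.Generator) -> np.ndarray:
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads / np.maximum(1.0, norms / C)  # ||g_tilde||_2 <= C
    noise = rng.normal(0.0, sigma * C, size=per_sample_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
g = rng.normal(size=(64, 10))     # |B| = 64 samples, 10 parameters
print(dp_sgd_step(g, C=1.0, sigma=1.1, rng=rng).shape)  # (10,)
```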
To preserve the utility–privacy trade-off, we apply DP only during the model-fitting stage, while retaining non-private, interpretable features and SHAP-based explanations during inference. The final output distribution thus benefits from privacy-preserving training while maintaining transparent decision logic. Forward-fill imputation was selected for its simplicity and ability to preserve time-consistent feature alignment. However, we note that injecting Gaussian noise or using alternative imputation strategies can introduce beneficial stochasticity, and in ablation studies, such noise injection yielded modest performance improvements under high-volatility market conditions.

5. Model Interpretability via SHAP

In high-stakes decision-making environments such as algorithmic trading and asset management, model interpretability is not merely a desideratum—it is a regulatory and operational necessity. Financial institutions and compliance officers must be able to audit the rationale behind algorithmic forecasts, particularly when such predictions inform capital allocation, hedging strategies, or automated execution logic. Black-box predictive models—especially those involving nonlinear interactions and high-dimensional features—pose a severe challenge to such accountability, often leading to mistrust and restricted deployment.
To address this interpretability gap, we utilize SHAP (SHapley Additive exPlanations) [4], a principled post hoc explanation framework that attributes output predictions to individual input features. Rooted in cooperative game theory, SHAP decomposes the model prediction into additive contributions while satisfying a set of desirable axioms, including efficiency, symmetry, nullity, and linearity. Unlike heuristic attribution techniques, SHAP provides formal guarantees of consistency and fairness, making it particularly well-suited for financial applications, where interpretability must be quantifiable and reproducible.

5.1. Additive Feature Attribution Framework

Consider a trained predictive function $f_\theta : \mathbb{R}^d \to [0, 1]$, where $x(t) = (x_1(t), \ldots, x_d(t))$ denotes the input feature vector at time $t$, and the output $f_\theta(x(t)) \in [0, 1]$ represents the predicted probability of upward stock movement. SHAP approximates this nonlinear function by an additive linear expansion around a reference input:
$$f_\theta(x(t)) = \phi_0 + \sum_{j=1}^{d} \phi_j(t),$$
where $\phi_0 := \mathbb{E}_x[f_\theta(x)]$ is the baseline model output over the training distribution, and $\phi_j(t) \in \mathbb{R}$ is the Shapley value associated with feature $j$ for sample $t$. Intuitively, $\phi_j(t)$ represents the marginal effect of $x_j(t)$ in the context of all possible feature coalitions. Let $F = \{1, \ldots, d\}$ denote the index set of input features. The Shapley value for feature $j \in F$ is defined as the weighted average of its marginal contribution across all subsets $S \subseteq F \setminus \{j\}$:
$$\phi_j(t) = \sum_{S \subseteq F \setminus \{j\}} \frac{|S|!\,(d - |S| - 1)!}{d!} \Big[ f_\theta\big(x_{S \cup \{j\}}(t)\big) - f_\theta\big(x_S(t)\big) \Big],$$
where $x_S(t)$ denotes the instance where only features in $S$ are retained and all others are replaced by a predefined background value (e.g., mean, median, or zero). The term inside the brackets quantifies the marginal contribution of feature $j$ in the context of subset $S$, and the weighting term ensures fairness across all possible permutations.
Although the exact computation of Equation (4) requires exponential time in d, the TreeSHAP algorithm enables efficient and exact computation for decision tree ensembles such as XGBoost by leveraging recursive structure and conditional independence.

5.2. Axiomatic Properties of SHAP

SHAP is uniquely characterized by its adherence to the following axiomatic properties:
  • Efficiency:
    $$\sum_{j=1}^{d} \phi_j(t) = f_\theta(x(t)) - \phi_0,$$
    ensuring that the total attribution is conserved and matches the model’s deviation from baseline.
  • Symmetry: If two features contribute equally across all coalitions $S$, then their attributions must be equal:
    $$\forall\, S \subseteq F \setminus \{j, k\}, \quad f_\theta(x_{S \cup \{j\}}) = f_\theta(x_{S \cup \{k\}}) \;\Rightarrow\; \phi_j = \phi_k.$$
  • Nullity: If feature $j$ has no effect on any subset prediction, then $\phi_j = 0$.
  • Linearity: For any two models $f$ and $g$, and scalars $a, b \in \mathbb{R}$,
    $$\phi_j(a f + b g) = a\, \phi_j(f) + b\, \phi_j(g),$$
    preserving attribution under linear combinations of models.
These axioms ensure that the interpretability results are both theoretically principled and practically invariant under model transformations.

5.3. Global Feature Importance via Aggregation

Although Shapley values are inherently local to each prediction, we can compute global feature importance by aggregating absolute attributions across a dataset of size N. For feature j, the global importance metric is defined as
$$I_j := \frac{1}{N} \sum_{t=1}^{N} |\phi_j(t)|,$$
which serves as an unbiased estimator of the expected marginal effect of feature $j$ on the model output. The values $\{I_j\}_{j=1}^{d}$ enable robust feature ranking and facilitate diagnostics such as feature selection, redundancy detection, and economic relevance analysis.

5.4. Sentiment Attribution in Multimodal Feature Space

In our setting, let $S_{\mathrm{sent}} \subseteq F$ denote the index set corresponding to FinBERT-derived sentiment features, including daily mean sentiment, sentiment standard deviation, and polarity extremes. For each instance $t$, these features receive local Shapley values $\{\phi_j(t)\}_{j \in S_{\mathrm{sent}}}$, which quantify their specific contributions to the model output.
We define the cumulative global importance of the sentiment subspace as
$$I_{\mathrm{sent}} := \sum_{j \in S_{\mathrm{sent}}} I_j,$$
which measures the net predictive contribution of textual information extracted from financial headlines. Empirical results (see Section 6) show that $I_{\mathrm{sent}}$ is consistently high across time periods and market regimes, confirming the complementary value of linguistic sentiment when fused with traditional technical and statistical signals.
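Computing $I_{\mathrm{sent}}$ is a small aggregation over the SHAP matrix; in the sketch below, `shap_values` and `feature_names` follow the TreeSHAP sketch in Section 3.3, and the sentiment column names are hypothetical:

```python
# Sketch: cumulative global importance of the sentiment feature subspace.
import numpy as np

sent_cols = ["sent_mean", "sent_std", "sent_range", "sent_kurtosis"]
sent_idx = [feature_names.index(c) for c in sent_cols]

I = np.abs(shap_values).mean(axis=0)   # global importance I_j per feature
I_sent = I[sent_idx].sum()             # cumulative sentiment importance
print(f"sentiment share of total attribution: {I_sent / I.sum():.1%}")
```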

6. Experiments

6.1. Experimental Setup

6.1.1. Dataset Description

We constructed a multimodal dataset consisting of historical stock prices and financial news headlines for a selected subset of S&P 500 companies over the period from January 2018 to December 2023. Daily stock market data, including open, high, low, close (OHLC) prices and trading volume, were obtained from Yahoo Finance (https://finance.yahoo.com (accessed on 10 August 2025)) using the yfinance Python library (https://github.com/ranaroussi/yfinance (accessed on 10 August 2025)). Financial news headlines were aggregated from various reputable sources (e.g., Reuters, Bloomberg, MarketWatch) via public APIs, licensed news aggregators (e.g., RavenPack, News API), and commercial financial datasets, with all news data timestamped and aligned to market trading hours to preserve temporal causality. For reproducibility and benchmarking, all data used in this study are available at https://github.com/Zdong104/FNSPID (accessed on 10 August 2025).
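For reference, the market-data pull amounts to a single yfinance call; a minimal sketch (ticker and date range match the study period) is:

```python
# Sketch: split/dividend-adjusted OHLCV data for one S&P 500 constituent.
import yfinance as yf

ohlcv = yf.download("AAPL", start="2018-01-01", end="2023-12-31",
                    auto_adjust=True)
print(ohlcv[["Open", "High", "Low", "Close", "Volume"]].tail())
```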
For each stock on trading day $t$, we aligned price and volume features with sentiment signals extracted from all headlines $\{h_j(t)\}$ published within a 24 h window preceding the market close on $t$. News appearing after 4:00 p.m. ET was assigned to the next trading day to preserve temporal causality.
We note that our focus on S&P 500 large-cap stocks was motivated by the availability of reliable, high-frequency news coverage, which is necessary for consistent headline-based sentiment modeling. This choice ensured sufficient textual data density across firms and time, enabling meaningful signal extraction and robust evaluation. However, we acknowledge that this introduced a media exposure bias, as large-cap firms typically receive more consistent and timely coverage compared to mid- or small-cap stocks. In lower-visibility contexts, model performance may degrade due to sparser or noisier sentiment signals, and this remains an important limitation on generalization. Nonetheless, the use of the S&P 500 is justified by its liquidity, representativeness of major market sectors, and wide acceptance as a benchmarking index in both academic and industry settings.

6.1.2. Label Definition

The final input feature vector x ( t ) for each trading day t was constructed by concatenating multiple feature groups that capture both quantitative market behavior and textual sentiment signals. These include the following: (i) technical indicators such as k-day moving averages, the relative strength index (RSI), moving average convergence divergence (MACD), and volatility estimators (e.g., rolling standard deviation); (ii) FinBERT-derived sentiment features computed from daily financial news, including the average sentiment score, maximum sentiment, and sentiment dispersion (standard deviation); and (iii) lagged price-based features, specifically the log returns over the past five trading days. To ensure numerical stability and comparability across features, all inputs were standardized using z-score normalization:
$$\hat{x}_i(t) = \frac{x_i(t) - \mu_i}{\sigma_i}$$
where $\mu_i$ and $\sigma_i$ denote the sample mean and standard deviation of feature $x_i$ computed from the training set. This normalization was applied independently to each feature dimension, preserving temporal integrity and preventing information leakage.

6.1.3. Model Configuration

We trained an XGBoost classifier with a logistic loss function and hyperparameters selected via cross-validation on the training set. The final configuration included a maximum tree depth of 6, a learning rate of 0.05, 300 boosting rounds, a subsample ratio of 0.8 for each boosting iteration, and a column subsample ratio of 0.8 at the tree level. To ensure temporal integrity and avoid lookahead bias, we adopted a rolling-window evaluation strategy. Specifically, for each evaluation fold, the model was trained on a historical window [ t 0 , t 1 ] and evaluated on a subsequent window [ t 1 + 1 , t 2 ] , thereby preserving chronological order and simulating a realistic forecasting scenario.
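A minimal sketch of this rolling-window protocol follows; the 24-month training and 6-month test spans are taken from Section 6.3.1, and the generator interface is our own:

```python
# Sketch: chronological train/test masks for rolling-window evaluation.
import pandas as pd

def rolling_windows(index: pd.DatetimeIndex, train_months: int = 24,
                    test_months: int = 6):
    start = index.min()
    while True:
        t1 = start + pd.DateOffset(months=train_months)   # end of training
        t2 = t1 + pd.DateOffset(months=test_months)       # end of testing
        if t2 > index.max():
            break
        yield (index >= start) & (index < t1), (index >= t1) & (index < t2)
        start = start + pd.DateOffset(months=test_months) # roll forward
```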

6.2. Baselines and Ablation Studies

To evaluate the effectiveness of our proposed framework, we compared the FinBERT-enhanced model (T + F) against two baselines: a technical-only model (T) that relies solely on price and volume-based indicators, and a sentiment-augmented model (T + V) that incorporates sentiment scores derived from the VADER lexicon. Unlike these baselines, our model leverages FinBERT to extract domain-specific sentiment features from financial news, capturing nuanced linguistic signals. Additionally, we conducted ablation studies by systematically removing key feature groups—such as FinBERT sentiment, volatility indicators, and momentum signals—to assess their individual contributions to performance. These experiments allowed us to quantify the marginal utility of each component and demonstrate that FinBERT-derived sentiment features significantly enhance predictive accuracy and model robustness.

6.3. Results

6.3.1. Predictive Performance

To assess the efficacy of incorporating sentiment features—particularly those extracted using FinBERT—we evaluated the predictive performance of five model variants across a six-year dataset spanning January 2018 to December 2023. We used five rolling test windows, each covering six months of held-out data, with training performed on the preceding 24 months. Table 1 reports the mean scores across windows.
As shown in Table 1, the addition of FinBERT sentiment features (T + F) leads to a consistent and statistically significant improvement across all performance metrics. The model achieves a relative gain of 12.6% in AUC over the technical-only baseline, and a 26.3% increase in average simulated PnL (the reported PnL figures assume idealized execution, and do not account for transaction costs, bid-ask spreads, slippage, or market impact. These simplifications may overstate real-world profitability, and are used solely for model comparison under controlled conditions.). Precision improves from 0.601 to 0.678, indicating a higher proportion of correctly predicted upward movements, which is particularly valuable in directional trading strategies.
The model that integrates only VADER sentiment (T + V) yields improvements over the technical baseline, but performs notably worse than T + F in all metrics, highlighting the limitations of lexicon-based sentiment in financial contexts. Furthermore, the momentum-only variant (T + M) performs marginally better than T, suggesting that traditional trend-following indicators capture some temporal dependencies, but lack the broader informational context provided by news sentiment.
We performed paired t-tests on AUC and F1 across the five test periods. Improvements of T + F over both T and T + V are statistically significant at the 99% confidence level (p < 0.01), confirming the robustness of FinBERT sentiment features across multiple temporal regimes and stocks.

6.3.2. Ablation Findings

To further investigate the contribution of each feature group, we conducted controlled ablation experiments, wherein subsets of the input feature space were selectively removed from the T + F configuration. Table 2 maps each feature group to the prior literature from which it is drawn. The results are included in Table 1 under rows T + F − V (FinBERT without volatility) and T + F − M (FinBERT without momentum).
Removing volatility-based features caused a decrease of 1.8% in AUC and a 0.7% reduction in average precision. This reflects the sensitivity of short-term price movement to market uncertainty, which is not fully encoded in price trends or sentiment. Removing momentum indicators had a slightly more pronounced effect, reducing PnL by over 1% and AUC by 2.5 points, suggesting their non-trivial interaction with FinBERT sentiment during trend reversals or sideways markets.
Interestingly, the model trained without both momentum and volatility features (not shown in table) still outperformed all baselines except T + F, with an AUC of 0.707 and PnL of 6.05%, illustrating the dominant role of FinBERT-enhanced sentiment features in the full-stack architecture.

6.3.3. Feature Group Contributions Assessed via SHAP Analysis

To rigorously evaluate the relative importance of different feature categories, we applied a global SHAP analysis over the test set. For each sample, the SHAP values quantified the marginal contribution of each feature to the model’s output. We aggregated the absolute SHAP values by feature group and normalized them to obtain a global importance distribution.
Table 3 reports the percentage of the total SHAP contribution attributed to each group. FinBERT-based sentiment features—including the mean, maximum, and standard deviation of daily sentiment scores—emerge as the most influential group, accounting for nearly 29% of the total model attribution. Volatility-related indicators and momentum signals rank next, while raw price returns and volume statistics show lower but non-negligible importance.
To better visualize the relative ranking, we present a horizontal bar chart in Figure 3. This figure clearly shows that FinBERT-enhanced sentiment features are the dominant driver of predictive behavior in our model.
The alignment between SHAP-based importance and ablation results offers model-agnostic validation of feature utility. Furthermore, SHAP provides interpretable, post hoc explanations that are essential for risk-aware deployment in finance—an industry governed by auditability and transparency constraints.

6.3.4. Market Regime Stability

To assess the robustness and adaptability of each model under varying economic conditions, we evaluated predictive performance across distinct market regimes spanning the 2018–2023 period. Specifically, we partitioned the test data into three macro regimes:
  • Volatile Regime (February–April 2020): Marked by the onset of the COVID-19 pandemic and rapid equity market drawdowns.
  • Bullish Regime (May 2020–December 2021): Characterized by a prolonged market recovery and strong upward trends.
  • Stagnant/Sideways Regime (January–December 2022): Defined by high inflation, monetary tightening, and low directional bias in price movement.
Table 4 presents the regime-specific AUC scores and average simulated profit-and-loss (PnL) percentages for the three primary models: technical-only (T), technical with VADER (T + V), and technical with FinBERT (T + F).
The results reveal that the FinBERT-enhanced model (T + F) maintained high performance across all regimes, consistently achieving AUC scores above 0.72 and generating significantly higher PnL than the baselines. Notably, during the volatile COVID-19 period, the T + F model achieved a 4.9% average return, versus 1.2% and 2.0% for the technical-only and VADER models, respectively. This suggests that sentiment extracted from FinBERT effectively captures investor fear and news-driven risk signals that are not present in price-based features.
During the bullish recovery phase of 2020–2021, sentiment signals remained predictive, likely reflecting optimistic language in earnings reports and macroeconomic news. The T + F model achieved an AUC of 0.748 and a simulated PnL of 7.8%, outperforming the T + V baseline by over 3.5 percentage points.
In the stagnant regime of 2022, where traditional trend-following indicators tended to degrade in efficacy, the T + F model continued to deliver strong performance (AUC: 0.723), while the T and T + V models degraded to near-random classification levels (AUC: 0.593 and 0.614, respectively). This demonstrates that FinBERT sentiment features contribute not only discriminative power, but also resilience across structurally different market environments.
These results highlight that FinBERT-derived features generalize well under regime shifts, capturing latent behavioral and emotional cues that are difficult to model with price signals alone. In contrast, lexicon-based sentiment (T + V) improves modestly over T, but fails to deliver consistent gains during turbulent or directionless markets. Thus, FinBERT sentiment offers both predictive value and temporal robustness, supporting its integration into production-grade forecasting systems.

6.3.5. SHAP-Based Interpretation

To understand the inner decision-making process of our predictive model, we conducted a comprehensive interpretability analysis using TreeSHAP [28]. SHAP (SHapley Additive exPlanations) assigns each feature an additive contribution to the model’s output for a specific instance, allowing for both local and global interpretability.
We first present the global SHAP feature ranking in Table 5, listing the ten most influential features based on their average importance. FinBERT-derived features occupy three of the top five ranks, with the mean sentiment score contributing the most to prediction decisions. Volatility indicators (GARCH and rolling standard deviation) also play a critical role, reflecting the model’s sensitivity to market risk conditions. Classical technical indicators and price returns, while still relevant, exhibit lower average influence.
To illustrate how individual predictions were formed, we examined several SHAP force plots, which show how each feature contributes positively or negatively to the model’s output probability relative to the baseline prediction ϕ 0 . One representative case occurred on the day following a major Q3 2021 earnings announcement for a large-cap technology company. The FinBERT-derived mean sentiment score was strongly positive, at +0.86, with low dispersion (standard deviation = 0.12), indicating agreement among news sources. The model assigned a prediction score of 0.92 (upward movement), significantly above the dataset baseline of 0.51. The SHAP force plot for this instance confirmed that FinBERT sentiment features and reduced volatility were the dominant positive contributors to the model’s confidence in a bullish signal.
We also analyzed interactions between features using SHAP dependence plots. A particularly strong interaction was found between FinBERT mean sentiment and GARCH volatility. The model showed higher confidence in predictions when sentiment was positive and volatility was low, but became more conservative when volatility was elevated—even in the presence of strong sentiment. This behavior suggests that the model implicitly adjusts sentiment influence based on market uncertainty, a desirable property for risk-sensitive financial forecasting.
To verify that the model does not rely disproportionately on any single feature or regime, we generated SHAP summary plots across different time periods. These plots confirmed a smooth, multi-feature contribution distribution and revealed that sentiment features became significantly more important during news-heavy periods (e.g., earnings season, macroeconomic announcements). In contrast, technical features became more relevant during low-news, trend-driven intervals.
Finally, the SHAP results closely mirror those from the ablation and performance analysis, creating a consistent and interpretable story. The combination of FinBERT-enhanced sentiment and volatility measures provides a reliable foundation for both predictive accuracy and model transparency. These findings validate the use of SHAP in financial machine learning, enabling practitioners and analysts to not only build performant models, but also justify their behavior to regulators, stakeholders, and auditors.

6.4. Robustness and Generalization

To evaluate the robustness and generalization capability of our proposed model, we conducted cross-sectional and temporal experiments across multiple equity assets and market regimes. Specifically, we tested the FinBERT-enhanced (T + F) model against the technical-only (T) and VADER-based (T + V) baselines on a diverse set of U.S. large-cap stocks: Apple (AAPL), Microsoft (MSFT), JPMorgan Chase (JPM), and Tesla (TSLA). These stocks were selected to represent a mix of sectors (technology, financials, and consumer cyclicals) and volatility profiles.
Each stock was evaluated over three key temporal partitions:
  • Pre-COVID Period: January 2018 to December 2019—relatively stable market with strong growth.
  • COVID Crash: February 2020 to April 2020—high volatility, systemic panic, sharp sell-offs.
  • Post-COVID Recovery: May 2020 to December 2022—prolonged rebound, rotation across sectors.
Table 6 reports the average AUC and F1-scores for each model–stock pair, along with the standard deviation (SD) of these metrics across regimes. The FinBERT-enhanced model exhibits not only superior average performance, but also lower variance, indicating consistent predictive ability across assets and market conditions.
Across all stocks, the FinBERT-enhanced model achieved the highest average AUC and F1-score, outperforming both the technical-only and VADER baselines by margins ranging from 6.8 to 9.5 percentage points in AUC. Notably, the standard deviation of AUC across market regimes was substantially lower for the T + F model (mean SD: 0.021) than for T (0.036) or T + V (0.032), suggesting greater temporal stability and resilience to structural shifts in market dynamics. While VADER performs well in general-purpose sentiment analysis, it often misinterprets financial terminology—treating terms like “depreciation” or “shortfall” as inherently negative—highlighting the need for domain-specific models like FinBERT that better capture the nuances of financial language.
We also computed Sharpe ratios of the model-generated signals (using daily returns from a hypothetical long-short strategy) across regimes. The T + F model yielded Sharpe ratios consistently above 1.5 in the post-COVID period, and maintained ratios above 1.0 even during the crash period, whereas the ratios for T and T + V often fell below 1.0, indicating noisier, less risk-adjusted signal quality.
An analysis of error consistency revealed that T + F exhibited fewer false positives during high-volatility drawdowns and better recall during periods of trend reversals—likely due to FinBERT’s ability to encode forward-looking sentiment signals ahead of market realization.
From a generalization perspective, the model demonstrated minimal overfitting despite increased feature dimensionality, as evidenced by the narrow gap between in-sample and out-of-sample performance (mean difference in AUC < 0.02 across folds). This can be attributed to the regularization in the XGBoost framework and the relatively low correlation between FinBERT sentiment features and traditional indicators.
Overall, these findings confirm that FinBERT-enhanced sentiment features contribute not only predictive accuracy, but also robustness and generalization across both asset classes and temporal regimes. The model’s consistent performance across heterogeneous conditions makes it suitable for deployment in real-world, multi-asset trading systems, where stability and interpretability are paramount.

6.5. Privacy Protection Results

We report the model’s predictive performance under varying levels of differential privacy. The privacy mechanism follows DP-SGD with $\delta = 10^{-5}$ and varying privacy budgets $\varepsilon \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, \infty\}$. Here, $\varepsilon = \infty$ corresponds to the non-private model.
The results in Table 7 provide a quantitative reference for selecting appropriate privacy budgets in production settings. Higher privacy (smaller $\varepsilon$) leads to moderate decreases in all metrics. However, performance degradation remains bounded, with over 95% of the baseline AUC preserved for $\varepsilon \ge 0.5$.

7. Conclusions

This study presented a hybrid approach for stock price prediction that combines FinBERT-based sentiment analysis with traditional technical indicators, enhanced by SHAP explainability. By extracting domain-specific sentiment signals from financial news and integrating them into an XGBoost classifier, our model significantly outperforms both technical-only and lexicon-based sentiment baselines. Empirical results across multiple assets and market regimes confirm the predictive strength and robustness of FinBERT-enhanced features, while SHAP analysis reveals their consistent and interpretable contribution to the model’s decisions. This framework not only improves forecasting accuracy, but also provides transparency—an essential requirement in financial applications, where interpretability and regulatory compliance are critical.
Beyond its predictive and explanatory strengths, the model generalizes well across volatile, bullish, and stagnant regimes and across market sectors, demonstrating resilience to structural shifts in market behavior. The integration of SHAP explanations supports informed decision-making by identifying when and why sentiment drives market movement. These findings highlight the value of combining domain-adapted NLP models with interpretable machine learning in finance.
In practical terms, investors and hedge funds can adopt the model as a signal-enhancement tool for short-term trading strategies, while risk managers may utilize the SHAP-based explanations for regime-aware exposure control and monitoring. Regulatory bodies can also benefit from the transparency offered by SHAP in audit, compliance, and model governance workflows. Additionally, the model’s lightweight, headline-based design enables near real-time deployment in production environments, making it suitable for integration into operational trading systems. Overall, this work represents a promising framework for developing deployable, interpretable trading systems. Future work may extend this framework to multi-asset portfolios, high-frequency pipelines, and richer financial text sources such as earnings calls and analyst reports.

Author Contributions

L.R.: Methodology, Software, Writing—Original Draft, Visualization. H.J.: Supervision, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krauss, C.; Do, X.A.; Huck, N. Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500. Eur. J. Oper. Res. 2017, 259, 689–702.
  2. Gu, S.; Kelly, B.; Xiu, D. Empirical asset pricing via machine learning. Rev. Financ. Stud. 2020, 33, 2223–2273.
  3. Araci, D. FinBERT: Financial sentiment analysis with pre-trained language models. arXiv 2019, arXiv:1908.10063.
  4. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4768–4777.
  5. Cowles, A. Stock market forecasting. Econom. J. Econom. Soc. 1944, 12, 206–214.
  6. Modis, T. Technological forecasting at the stock market. Technol. Forecast. Soc. Chang. 1999, 62, 173–202.
  7. Singh, S.; Madan, T.K.; Kumar, J.; Singh, A.K. Stock market forecasting using machine learning: Today and tomorrow. In Proceedings of the 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Kannur, India, 5–6 July 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 1, pp. 738–745.
  8. Kumar, G.; Jain, S.; Singh, U.P. Stock market forecasting using computational intelligence: A survey. Arch. Comput. Methods Eng. 2021, 28, 1069–1101.
  9. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
  10. Fischer, T.; Krauss, C. Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res. 2018, 270, 654–669.
  11. Du, K.; Xing, F.; Mao, R.; Cambria, E. Financial sentiment analysis: Techniques and applications. ACM Comput. Surv. 2024, 56, 1–42.
  12. Chan, S.W.; Chong, M.W. Sentiment analysis in financial texts. Decis. Support Syst. 2017, 94, 53–64.
  13. Mishev, K.; Gjorgjevikj, A.; Vodenska, I.; Chitkushev, L.T.; Trajanov, D. Evaluation of sentiment analysis in finance: From lexicons to transformers. IEEE Access 2020, 8, 131662–131682.
  14. Loughran, T.; McDonald, B. When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. J. Financ. 2011, 66, 35–65.
  15. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
  16. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
  17. Yang, Y.; Uy, M.C.S.; Huang, A. FinBERT: A pretrained language model for financial communications. arXiv 2020, arXiv:2006.08097.
  18. Gong, X.; Guan, K.; Chen, Q. The role of textual analysis in oil futures price forecasting based on machine learning approach. J. Futur. Mark. 2022, 42, 1987–2017.
  19. Feng, Z.; Shi, R.; Jiang, Y.; Han, Y.; Ma, Z.; Ren, Y. A multiscale gradient fusion method for color image edge detection using CBM3D filtering. Sensors 2025, 25, 2031.
  20. Huang, J.; Wang, J.; Li, Q.; Jin, X. Social media development and multi-modal input for stock market prediction: A review. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC), Big Island, HI, USA, 19–22 February 2024; IEEE Computer Society: Piscataway, NJ, USA, 2024; pp. 198–202.
  21. Gangwani, P.; Panthi, V. Leveraging multimodal data and deep learning for enhanced stock market prediction. In AI-Based Advanced Optimization Techniques for Edge Computing; John Wiley & Sons: Hoboken, NJ, USA, 2025; pp. 93–127.
  22. Upadhyay, P.; Tomar, P.; Yadav, S.P. Advancements in Alzheimer’s disease classification using deep learning frameworks for multimodal neuroimaging: A comprehensive review. Comput. Electr. Eng. 2024, 120, 109796.
  23. Zhang, H.; Dong, L.; Gao, G.; Hu, H.; Wen, Y.; Guan, K. DeepQoE: A multimodal learning framework for video quality of experience (QoE) prediction. IEEE Trans. Multimed. 2020, 22, 3210–3223.
  24. Ektefaie, Y.; Dasoulas, G.; Noori, A.; Farhat, M.; Zitnik, M. Multimodal learning with graphs. Nat. Mach. Intell. 2023, 5, 340–350.
  25. Liang, P.P.; Zadeh, A.; Morency, L.P. Foundations & trends in multimodal machine learning: Principles, challenges, and open questions. ACM Comput. Surv. 2024, 56, 1–42.
  26. Pan, Z.; Ying, Z.; Wang, Y.; Zhang, C.; Zhang, W.; Zhou, W.; Zhu, L. Feature-based machine unlearning for vertical federated learning in IoT networks. IEEE Trans. Mob. Comput. 2025, 24, 5031–5044.
  27. Man, X.; Luo, T.; Lin, J. Financial sentiment analysis (FSA): A survey. In Proceedings of the 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS), Taipei, Taiwan, 6–9 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 617–622.
  28. Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent individualized feature attribution for tree ensembles. arXiv 2018, arXiv:1802.03888.
  29. Bao, W.; Yue, J.; Rao, Y. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS ONE 2017, 12, e0180944.
  30. Takeuchi, L.; Lee, Y. Applying deep learning to enhance momentum trading strategies in stocks. Stanford University Working Paper; Stanford University: Stanford, CA, USA, 2013.
  31. Li, M.; Chen, L.; Zhao, J.; Li, Q. Sentiment analysis of Chinese stock reviews based on BERT model. Appl. Intell. 2021, 51, 5016–5024.
  32. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318.
  33. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327.
Figure 1. Rolling time-aligned data flow.
Figure 2. Key temporal regimes defined for robustness analysis. The dataset is segmented into Pre-COVID (2018-01 to 2020-01), COVID Crash (2020-02 to 2020-04), Post-COVID Recovery (2020-05 to 2021-11), and the Inflation & Rate Hikes regime (2021-12 to 2023-12). These periods capture distinct market dynamics, including stability, shock, recovery, and policy tightening.
Figure 3. SHAP feature importance by individual features.
Table 1. Extended model performance comparison (averaged over 5 rolling windows).

Model                              | Accuracy | F1-Score | AUC   | Precision | PnL (%)
T (Technical only)                 | 0.624    | 0.608    | 0.652 | 0.601     | 3.21
T + M (Technical + Momentum only)  | 0.636    | 0.620    | 0.661 | 0.615     | 3.87
T + V (VADER sentiment)            | 0.644    | 0.631    | 0.669 | 0.622     | 4.85
T + F (FinBERT sentiment)          | 0.703    | 0.688    | 0.740 | 0.678     | 7.63
T + F − V (FinBERT w/o volatility) | 0.689    | 0.674    | 0.722 | 0.666     | 6.93
T + F − M (FinBERT w/o momentum)   | 0.676    | 0.663    | 0.715 | 0.656     | 6.57
Table 2. Feature group mapping to the prior literature.

Feature Group        | Example Features Used                                                           | Supporting References
Technical Indicators | Moving Average (MA), Relative Strength Index (RSI), Log Returns, Volume Change | [29,30]
Sentiment Features   | FinBERT Mean Sentiment, FinBERT Sentiment Std, FinBERT Max Sentiment           | [31,32]
Volatility Metrics   | GARCH Volatility, Rolling Std of Log Returns                                   | [33]
Table 3. Relative SHAP importance by feature group (normalized to 100%).

Feature Group         | Constituent Features                      | SHAP Importance (%)
FinBERT Sentiment     | Mean, Max, Std of sentiment scores        | 28.6
Volatility Indicators | GARCH volatility, rolling std             | 21.4
Momentum Indicators   | MA(5), MA(10), RSI, MACD                  | 17.3
Price Returns         | Log returns $r_t$, $r_{t-1}$, $r_{t-2}$   | 13.8
Volume Features       | Avg daily volume, volume delta            | 9.2
Table 4. Model performance by market regime (AUC/Avg. PnL%).

Model                   | Volatile (Q1 2020) | Bullish (2020–2021) | Stagnant (2022)
T (Technical only)      | 0.608/1.2%         | 0.642/3.6%          | 0.593/2.4%
T + V (VADER sentiment) | 0.635/2.0%         | 0.661/4.2%          | 0.614/2.7%
T + F (FinBERT)         | 0.729/4.9%         | 0.748/7.8%          | 0.723/6.2%
Table 5. Top 10 features by global SHAP importance.

Rank | Feature (Category)                        | SHAP Importance (%)
1    | FinBERT Mean Sentiment (Sentiment)        | 11.2
2    | GARCH Volatility (Volatility)             | 11.0
3    | Rolling Std. of Returns (Volatility)      | 9.8
4    | FinBERT Max Sentiment (Sentiment)         | 9.4
5    | FinBERT Sentiment Std. (Sentiment)        | 8.0
6    | Log Return $r_{t-2}$ (Returns)            | 5.1
7    | Moving Average MA(5) (Technical)          | 5.1
8    | Volume Change (Volume)                    | 4.8
9    | Log Return $r_{t-1}$ (Returns)            | 4.2
10   | Relative Strength Index (RSI) (Technical) | 3.7
Table 6. Generalization performance across stocks and market regimes.

Stock | Model | Avg AUC | Avg F1-Score | AUC SD
AAPL  | T     | 0.624   | 0.602        | 0.031
AAPL  | T + V | 0.645   | 0.625        | 0.027
AAPL  | T + F | 0.713   | 0.688        | 0.018
MSFT  | T     | 0.618   | 0.596        | 0.035
MSFT  | T + V | 0.641   | 0.617        | 0.030
MSFT  | T + F | 0.701   | 0.675        | 0.021
JPM   | T     | 0.591   | 0.577        | 0.038
JPM   | T + V | 0.610   | 0.598        | 0.034
JPM   | T + F | 0.674   | 0.655        | 0.025
TSLA  | T     | 0.648   | 0.625        | 0.040
TSLA  | T + V | 0.667   | 0.644        | 0.037
TSLA  | T + F | 0.735   | 0.712        | 0.020
Table 7. Performance vs. privacy budget ε (averaged over 5 folds).

Privacy Level | AUC   | F1-Score | PnL (%) | ε
DP (ε = 0.1)  | 0.662 | 0.611    | 4.87    | 0.1
DP (ε = 0.2)  | 0.668 | 0.617    | 5.03    | 0.2
DP (ε = 0.3)  | 0.674 | 0.625    | 5.27    | 0.3
DP (ε = 0.4)  | 0.680 | 0.630    | 5.42    | 0.4
DP (ε = 0.5)  | 0.699 | 0.648    | 6.10    | 0.5
DP (ε = 0.75) | 0.708 | 0.656    | 6.45    | 0.75
DP (ε = 1.0)  | 0.717 | 0.665    | 6.87    | 1.0
DP (ε = 1.5)  | 0.723 | 0.670    | 7.03    | 1.5
DP (ε = 2.0)  | 0.726 | 0.673    | 7.14    | 2.0
DP (ε = 3.0)  | 0.733 | 0.678    | 7.38    | 3.0
DP (ε = 4.0)  | 0.735 | 0.681    | 7.45    | 4.0
DP (ε = 6.0)  | 0.738 | 0.686    | 7.59    | 6.0
DP (ε = 8.0)  | 0.739 | 0.687    | 7.61    | 8.0
Non-private   | 0.740 | 0.688    | 7.63    | ∞