Article

A Momentum-Based Normalization Framework for Generating Profitable Analyst Sentiment Signals

Department of Computer Science and Engineering, University of Colorado Denver, Denver, CO 80204, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Int. J. Financial Stud. 2026, 14(1), 4; https://doi.org/10.3390/ijfs14010004
Submission received: 31 October 2025 / Revised: 16 December 2025 / Accepted: 26 December 2025 / Published: 1 January 2026

Abstract

The diverse rating scales used by brokerage firms pose significant challenges for aggregating analyst recommendations in financial research. We develop a momentum-based normalization framework that transforms heterogeneous rating changes into standardized sentiment signals using firm-relative, past-only empirical distribution functions with event-based lookback and expanding global quantile classification. Using 68,660 rating events from 270 brokerage firms covering 106 large-cap U.S. stocks (2019–2025), our approach generates statistically significant Buy–Sell spreads at all horizons: 1-month (0.96%, t = 3.07, p = 0.002), 2-month (1.36%, t = 3.07, p = 0.002), and 3-month (1.94%, t = 3.66, p < 0.001). Fama–French six-factor regressions confirm 13.6% annualized alpha for Buy signals (t = 3.81) after controlling for market, size, value, profitability, investment, and momentum factors. True out-of-sample validation on May–September 2025 data achieves 107% retention of in-sample 1-month performance (four of five months positive), indicating robust signal generalization. The framework provides a theoretically grounded and empirically validated methodology for standardizing analyst sentiment suitable for quantitative investment strategies and academic research.

1. Introduction

Sell-side analysts at brokerage firms serve as critical information intermediaries in financial markets, continuously evaluating publicly traded companies and issuing investment recommendations that influence billions of dollars in capital allocation decisions. These recommendations typically follow a five-tier scale (Strong Buy, Buy, Hold, Sell, and Strong Sell), though individual firms often employ proprietary terminology such as ‘Overweight,’ ‘Outperform,’ or ‘Conviction Buy.’ Each rating represents the analyst’s assessment of a stock’s expected performance relative to the broader market or its sector peers over a specified time horizon, typically 12–18 months.
When new information emerges, such as quarterly earnings reports, changes in management guidance, competitive dynamics, or macroeconomic developments, analysts revise their recommendations through what the industry terms ‘upgrades’ or ‘downgrades.’ An upgrade, such as moving a stock from Hold to Buy, signals improved expectations for the stock’s future performance, while a downgrade, such as from Buy to Hold or Sell, indicates deteriorating prospects. These rating changes are particularly influential because they represent a change in analyst opinion rather than a static assessment. Womack (1996) documented that upgrades generate average abnormal returns of +2.4%, while downgrades produce −9.1% returns over six-month periods, demonstrating the substantial economic impact of these analyst actions on market prices.
However, the aggregation and interpretation of these sell-side analyst recommendations represent a fundamental methodological challenge in financial markets research and practice. While extensive literature documents the information content of analyst recommendations (Barber et al., 2001; Jegadeesh et al., 2004), researchers and practitioners face a persistent empirical challenge: the lack of standardized approaches for harmonizing diverse rating scales used by brokerage firms into coherent, predictive investment signals.
The magnitude of this heterogeneity problem is substantial. Our comprehensive dataset reveals 31 distinct rating terminologies used across major brokerage firms (Table 1), ranging from traditional five-point scales to idiosyncratic classifications such as “Conviction Buy” and “Sector Outperform.” Previous approaches have relied on simple linear mappings or database-provider standardizations that fail to account for firm-specific biases and the relative nature of rating changes within individual brokerage houses. Loh and Stulz (2011) provided evidence that only 12% of recommendation changes generate meaningful market impact, suggesting that current aggregation methods fail to isolate the most informative analyst signals.
This paper addresses these limitations with demonstrable results. Using a comprehensive dataset of 68,660 rating events from 270 brokerage firms covering 106 large-cap U.S. stocks from January 2019 through April 2025, we develop a momentum-based normalization framework using an event-based lookback that generates statistically significant Buy–Sell spreads at all horizons: a 3-month spread of 1.94% (t = 3.66, p < 0.001), a 2-month spread of 1.36% (t = 3.07, p = 0.002), and a 1-month spread of 0.96% (t = 3.07, p = 0.002). Risk-adjusted analysis using Fama–French six-factor regressions confirms an annualized alpha of 13.6% (t = 3.81) after controlling for market, size, value, profitability, investment, and momentum factors. True out-of-sample validation using fresh May to September 2025 data achieves 107% retention of in-sample 1-month performance (4/5 months positive), establishing that the momentum classification generalizes to unseen data.
Our approach incorporates the temporal dynamics of analyst behavior, leveraging well-documented momentum effects in financial markets (Ali & Hirshleifer, 2020; Jegadeesh & Titman, 1993) to enhance signal quality through firm-relative normalization procedures.
Our contribution to the literature is threefold. First, we develop a mathematically rigorous framework for normalizing analyst recommendations that accounts for both cross-sectional differences across firms and temporal changes in rating behavior. Second, we provide empirical evidence that momentum-based normalization significantly outperforms traditional heuristic approaches in predicting future returns across multiple time horizons. Third, we offer a practical implementation methodology that can be readily adopted by both academic researchers and investment practitioners.
This paper is structured as follows. Section 2 provides a comprehensive literature review and theoretical foundation. Section 3 presents our momentum normalization methodology and benchmark approaches. Section 4 details the performance evaluation framework. Section 5 reports the empirical results and robustness analysis. Section 6 discusses implications and practical implementation considerations. Section 7 concludes with directions for future research.

2. Literature Review and Theoretical Foundation

2.1. Analyst Recommendation Literature

The theoretical foundation for analyst recommendation research was established by Womack (1996), who documented significant abnormal returns following recommendation changes: +2.4% for upgrades and −9.1% for downgrades over six-month periods. This seminal contribution employed event study methodology but relied on binary classification schemes, potentially losing valuable information from the complete recommendation spectrum available to market participants.
Barber et al. (2001) advanced the field by developing consensus recommendation measures, demonstrating that portfolios of highly recommended stocks generate 102 basis points of monthly excess returns. Their methodology introduced systematic standardization by mapping all analyst recommendations to a numerical five-point scale:
$$\bar{R}_{it} = \frac{1}{N_{it}} \sum_{j=1}^{N_{it}} R_{ijt}$$
where $\bar{R}_{it}$ represents the consensus recommendation for stock i at time t and $R_{ijt}$ denotes the recommendation from analyst j mapped to their standardized scale:
  • 1 = Strong Buy (most bullish recommendation).
  • 2 = Buy.
  • 3 = Hold (neutral stance).
  • 4 = Sell.
  • 5 = Strong Sell (most bearish recommendation).
Note that lower numerical values indicate more favorable recommendations, the opposite of typical survey scales. This counterintuitive coding, in which lower values represent ‘better’ ratings, has become the standard convention in academic research. However, this approach assumes cardinality and equal spacing between rating categories, assumptions challenged by subsequent research on ordinal data properties (Liddell & Kruschke, 2018).
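The consensus construction above can be sketched in a few lines of Python. This is an illustrative sketch, not the original authors’ code; the label set and the `consensus` helper are assumptions for demonstration.

```python
# Map recommendation labels to the standard academic 1-5 scale
# (lower = more bullish) and average across covering analysts.
SCALE = {"Strong Buy": 1, "Buy": 2, "Hold": 3, "Sell": 4, "Strong Sell": 5}

def consensus(recommendations):
    """Mean numerical rating across analysts covering a stock at time t."""
    scores = [SCALE[r] for r in recommendations]
    return sum(scores) / len(scores)

# Three analysts (two bullish, one neutral) yield a consensus of 2.0,
# i.e., leaning toward Buy on the inverted scale.
print(consensus(["Strong Buy", "Buy", "Hold"]))  # 2.0
```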
Conrad et al. (2006) provided a crucial methodological advancement by applying ordered probit models to analyst recommendation changes, explicitly recognizing their ordinal nature. Their analysis of recommendation responses to major news events revealed asymmetric behavior: analysts are more likely to downgrade following negative news than to upgrade following positive news of similar magnitude. This asymmetry underscores the importance of treating recommendations as ordinal rather than cardinal data, as the “distance” between categories varies contextually.
Brav and Lehavy (2003) extended the empirical foundation by examining analyst target prices alongside recommendations, finding significant market reactions to target price revisions, with average abnormal returns ranging from −3.96% for unfavorable revisions to +3.21% for favorable revisions. Their documentation of systematic optimistic bias, with average target prices 28% above current market prices, parallels the distributional biases we observe in recommendation data, reinforcing the need for sophisticated normalization techniques that account for non-uniform category distributions.
Green (2006) demonstrated that early access to analyst recommendations provides substantial value to institutional clients, generating average two-day returns of 1.02% for upgrades and 1.50% for downgrades after transaction costs. This asymmetric information dissemination pattern highlights the temporal dynamics of recommendation impact and suggests that proper normalization methods capturing the timing of rating changes may help reduce information asymmetries across investor types.

2.2. Momentum and Information Diffusion

The momentum phenomenon in asset pricing, first systematically documented by Jegadeesh and Titman (1993), provides theoretical justification for our temporal approach to recommendation normalization. Their seminal finding that strategies buying past winners and selling past losers generate 12% annual returns establishes momentum as one of the most robust anomalies in financial markets. The classic momentum strategy of Jegadeesh and Titman (1993) follows a two-stage process: formation and holding. Their J / K momentum strategy is formalized as
$$R^{J/K}_{p,\,t+1:t+K} = \frac{1}{N_W}\sum_{i \in W} R_{i,\,t+1:t+K} - \frac{1}{N_L}\sum_{i \in L} R_{i,\,t+1:t+K}$$
where $J$ denotes the formation period length (typically 3, 6, 9, or 12 months) during which returns are calculated from month $t-J$ through $t$ to identify winners and losers; $W$ represents the winner portfolio containing stocks in the top decile (highest 10% returns during formation), with $N_W$ stocks; $L$ represents the loser portfolio containing stocks in the bottom decile (lowest 10% returns during formation), with $N_L$ stocks; $K$ denotes the holding period length (typically 3, 6, 9, or 12 months) spanning months $t+1$ through $t+K$; $R_{i,\,t+1:t+K}$ represents the total return of stock i over the K-month holding period; and $R^{J/K}_{p,\,t+1:t+K}$ represents the portfolio return for the complete J/K strategy. In their most successful $J=6$/$K=6$ strategy, ranking stocks based on past 6-month returns (formation), constructing long–short portfolios of the top and bottom deciles, and holding for 6 months generates approximately 1% monthly returns, or 12% annually compounded.
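The two-stage J/K construction can be sketched as follows. This is a minimal illustration with hypothetical tickers and returns, not the Jegadeesh–Titman implementation, and it uses a simple index-based decile cutoff.

```python
# Rank stocks on formation-period returns, then go long the top decile
# (winners) and short the bottom decile (losers), equal-weighted.
def jk_momentum_return(formation_returns, holding_returns, decile=0.1):
    """Winner-minus-loser return for one J/K cohort.

    formation_returns: {ticker: return over months t-J..t}
    holding_returns:   {ticker: return over months t+1..t+K}
    """
    ranked = sorted(formation_returns, key=formation_returns.get)
    n = max(1, int(len(ranked) * decile))
    losers, winners = ranked[:n], ranked[-n:]
    long_leg = sum(holding_returns[t] for t in winners) / n
    short_leg = sum(holding_returns[t] for t in losers) / n
    return long_leg - short_leg

# Four hypothetical stocks with quartile cutoffs: long the best
# formation-period performer, short the worst.
spread = jk_momentum_return(
    {"A": 0.30, "B": 0.10, "C": -0.20, "D": 0.05},
    {"A": 0.05, "B": 0.01, "C": -0.02, "D": 0.00},
    decile=0.25,
)
```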
Hong et al. (2000) provided the critical theoretical link between analyst coverage and momentum effects through their gradual information diffusion model. They argued that information travels slowly in markets, particularly for stocks with limited analyst coverage, creating predictable return patterns. Their empirical specification examines how momentum profits vary with analyst coverage:
$$\pi_{\text{momentum},i} = \alpha + \beta_1 \cdot \log(1+\text{Coverage}_i) + \beta_2 \cdot \log(\text{Size}_i) + \epsilon_i$$
where $\pi_{\text{momentum},i}$ represents the monthly momentum profit for stock i, measured as the return difference between past winners and losers (in percentage points per month); $\alpha$ represents the intercept, the baseline momentum profit when coverage and size are at minimum levels; $\beta_1$ represents the coefficient on analyst coverage (empirically negative: $\beta_1 < 0$), indicating that as coverage decreases, momentum profits increase; $\text{Coverage}_i$ represents the number of sell-side analysts providing earnings estimates for stock i from the I/B/E/S database; $\beta_2$ represents the coefficient on size, included as a control variable (also typically negative); $\text{Size}_i$ represents the market capitalization of stock i in millions of dollars; and $\epsilon_i$ represents the error term capturing unexplained variation. The negative sign on $\beta_1$ is the key finding: when $\beta_1 < 0$, fewer analysts disseminating information leads to slower price adjustments and larger momentum opportunities. Empirically, stocks in the lowest coverage quintile generate momentum profits of 1.43% per month, while the highest coverage quintile generates only 0.26% per month, a difference of 1.17 percentage points monthly or approximately 14% annually, confirming that information diffusion speed, proxied by analyst coverage, is a primary driver of momentum profitability.
Recent research by Ali and Hirshleifer (2020) demonstrated that momentum effects extend through analyst coverage networks, generating a monthly alpha of 1.68% (t-statistic = 9.67). Their connected-firm momentum factor is constructed as
$$CF\text{-}MOM_{i,t} = \sum_{j \in \mathrm{Connected}(i)} w_{ij,t} \cdot r_{j,\,t-12:t-2}$$
where $w_{ij,t}$ represents the connection weight between firms i and j through shared analyst coverage and $r_{j,\,t-12:t-2}$ denotes the past return of connected firm j. This finding suggests that analyst recommendation patterns contain exploitable momentum signals transmitted across coverage networks.
Building on this foundation, Long et al. (2025) introduced net peer momentum (NPM), the excess return of analyst-connected peers over the focal firm, and demonstrated its predictive power in Chinese equity markets. Their key insight is that relative positioning versus peers, rather than absolute peer returns, drives predictability. High-NPM portfolios generate over 1.26% monthly excess returns, with NPM exhibiting stronger pricing power than simple peer momentum. Our momentum normalization framework operationalizes similar intuitions at the analyst-firm level. Rather than examining momentum spillovers across firms (as in Ali & Hirshleifer) or relative to peers (as in Long et al.), we normalize rating changes within each analyst firm’s own historical distribution. This firm-relative approach ensures that a “strong upgrade” from a conservative analyst firm (rare in their history) receives appropriate weight relative to routine upgrades from more volatile raters, analogous to how NPM captures the information content of relative peer positioning.
Lockwood et al. (2023) provided direct empirical evidence linking analyst recommendation changes to momentum profits through the interaction model:
$$R_{i,t+1} = \alpha + \beta_1 MOM_{i,t} + \beta_2 \Delta Rec_{i,t} + \beta_3 \left( MOM_{i,t} \times \Delta Rec_{i,t} \right) + \gamma X_{i,t} + \epsilon_{i,t+1}$$
where $\Delta Rec_{i,t}$ represents the recommendation change and $\beta_3 > 0$ indicates that upgrades amplify momentum returns. They found that upgrades from Hold to Buy increase momentum returns by 3.4% annually, with the interaction term $\beta_3$ statistically significant at the 1% level.
Jegadeesh and Kim (2006) extended the international evidence base, documenting cross-country variation in recommendation revision returns that can be modeled as
$$R_{i,c,t+1} = \alpha_c + \beta_c \cdot \Delta Rec_{i,c,t} + \epsilon_{i,c,t+1}$$
where c indexes countries and $\beta_c$ is highest in the United States, followed by Japan. This cross-market variation underscores the importance of adaptive normalization frameworks that can accommodate different institutional contexts.
The theoretical synthesis of momentum and analyst recommendations suggests that rating changes contain gradual-diffusion information that markets incorporate slowly. Our momentum-based framework operationalizes this insight by transforming discrete ordinal rating changes into continuous momentum scores, effectively combining the momentum principle in Equation (2) with analyst information diffusion patterns captured in Equations (3) through (5). This transformation captures both the direction and relative magnitude of analyst sentiment shifts within firm-specific contexts, providing a unified framework for extracting predictive signals from ordinal recommendation data.

2.3. Information Content and Market Efficiency

Asquith et al. (2005) analyzed the complete content of analyst reports, finding that recommendation changes contain information beyond earnings forecasts and price targets. Their comprehensive analysis of 1126 reports revealed that markets react most strongly to recommendation changes, suggesting that our focus on rating dynamics is well-founded.
Different responses to analyst recommendations across investor types represent a critical consideration for normalization frameworks. Malmendier and Shanthikumar (2007) documented that large (institutional) traders adjust for upward bias in recommendations, exerting no abnormal buy pressure following buy recommendations and selling pressure following hold recommendations. In contrast, small investors trade literally on recommendations, failing to account for well-documented biases, particularly those from affiliated analysts. This differential sophistication suggests that recommendation normalization methods may have varying effectiveness across investor segments.
Building on this theoretical foundation, we now present our momentum-based normalization framework that addresses the methodological challenges identified in the literature. The following sections detail our data sources and methodology, present our empirical findings, and discuss their implications for financial research and investment practice.

2.4. Machine Learning and Natural Language Processing Advances

Recent advances in machine learning and natural language processing have opened new frontiers for analyst recommendation research. Gu et al. (2020) demonstrated that machine learning methods can materially improve return predictability relative to traditional linear models, with neural networks and ensemble methods delivering economically meaningful gains over regression-based strategies. Their comprehensive comparison of ML techniques in asset pricing established methodological foundations that are directly applicable to analyst signal extraction.
The emergence of domain-specific language models has enabled more nuanced analysis of analyst reports and related financial text. Ke et al. (2019) demonstrated that textual analysis of financial news using supervised topic modeling can predict returns beyond what is captured by standard sentiment dictionaries or commercial vendors, highlighting substantial incremental information in news flow. More recently, Huang et al. (2023) developed FinBERT, a pre-trained language model fine-tuned on financial corpora, achieving state-of-the-art performance in sentiment classification and demonstrating that domain-specific pre-training significantly outperforms general-purpose models for financial applications.
Most recently, the application of large language models (LLMs) to financial analysis has attracted significant research attention. Lopez-Lira and Tang (2023) demonstrated that ChatGPT (GPT-3.5) can predict stock price movements from news headlines, with sentiment scores exhibiting significant predictive power for next-day returns. However, these approaches require extensive computational resources and careful model configuration, whereas our momentum-based framework provides a computationally efficient alternative that extracts signals from structured rating data without requiring full-text analysis.
Our primary contribution is a robust normalization methodology that addresses a fundamental challenge in analyst recommendation research: the diverse rating scales used by brokerage firms. The event-based lookback with expanding global quantile classification represents a methodologically rigorous solution that achieves statistically significant return spreads across all horizons (t > 3.0) and generates economically meaningful alpha after controlling for known risk factors.
Beyond its standalone value for quantitative investment strategies, this framework offers substantial benefits for machine learning research. The statistically validated Buy signals (FF6 alpha = 13.6%, t = 3.81) provide high-quality ground-truth labels for supervised learning models seeking to predict analyst sentiment from alternative data sources. Future research can leverage our normalization methodology to (1) train neural networks on text features from analyst reports, (2) validate whether alternative data (satellite imagery, web traffic, etc.) predicts analyst rating changes, and (3) develop real-time sentiment indicators that anticipate formal rating actions.

3. Methodology

In this section, we present our momentum-based normalization framework for transforming diverse analyst recommendations into standardized investment signals. Figure 1 provides a comprehensive overview of our five-stage methodology, illustrating the systematic transformation from raw analyst grades through mathematical processing to actionable investment signals. The framework addresses key challenges in analyst recommendation standardization: semantic heterogeneity across firms, temporal dynamics in rating behavior, and the need for balanced signal distributions suitable for systematic investment strategies.

3.1. Data Sources and Sample Construction

Our empirical analysis employs a comprehensive multi-source dataset spanning 1 January 2019 through 30 April 2025. The sample comprises 106 U.S.-listed large-capitalization stocks, specifically selected as the top 10 companies by market capitalization within each of the 12 Global Industry Classification Standard (GICS) sectors. This deliberate sample construction ensures the following:
  • Sufficient Analyst Coverage: Large-cap stocks attract multiple analyst firms (mean coverage: 28 firms per stock after name standardization), providing rich data for within-firm normalization.
  • Sector Diversification: Equal representation across sectors (Information Technology, Financials, Healthcare, Consumer Discretionary, etc.) prevents sector-specific biases.
  • Market Representativeness: These 106 stocks are the largest market-capitalization holdings within each sector.
  • Data Quality: Liquid stocks with continuous trading minimize microstructure noise.
The integrated dataset comprises three components:
  • Analyst Grades Dataset: 68,660 individual rating events from 270 distinct brokerage firms.
  • Consensus Recommendations: 7844 monthly aggregate recommendation distributions.
  • Price Data: 17,308 monthly dividend-adjusted observations.
Sample Size Justification: While 106 stocks may appear limited compared to studies using the entire Center for Research in Security Prices (CRSP) universe, our approach prioritizes depth over breadth. These stocks generate 68,660 individual rating events, with approximately 648 ratings per stock over the sample period. This density enables robust firm-specific normalization that would be impossible with thinly covered stocks. The trade-off between broad coverage and data richness reflects a conscious methodological choice: we require sufficient within-firm rating history to calculate meaningful momentum scores. Extending to smaller stocks with sporadic coverage would introduce noise rather than information.
We acknowledge that this creates a large-cap bias (discussed in Section 6), limiting generalizability to small-cap or international markets. However, for the objective of developing and validating a normalization methodology, this focused sample provides an ideal testing ground.

3.2. Rating Taxonomy and Standardization

We develop a comprehensive rating taxonomy that maps 31 unique grade classifications observed in our dataset to a five-point ordinal scale, as shown in Table 1. The mapping process follows a hierarchical structure based on semantic analysis of rating terminology, with category assignments aligned to standard industry practice where possible.
This taxonomy achieves complete coverage of all observed ratings while maintaining semantic consistency across firms. The ordinal ranking receives empirical validation through our momentum framework results, which demonstrate monotonic return patterns consistent with the semantic hierarchy: Buy signals (2.0% 1-month return) outperform Hold signals (1.1%), which outperform Sell signals (0.9%).
Robustness analysis involving reclassification of borderline-positive terms (e.g., “Positive,” “Above Average”) from Buy to Hold categories leaves our main results qualitatively unchanged, indicating that the findings are not sensitive to reasonable alternative taxonomy specifications.

3.3. The Momentum Normalization Framework

Our momentum-based approach operates through a systematic multi-stage process designed to capture both the direction and magnitude of analyst sentiment changes while controlling for firm-specific characteristics and avoiding look-ahead bias.

3.3.1. Mathematical Formulation

For each analyst rating event, we define the ordinal rating at time t for stock i by analyst firm j as $R_{ijt} \in \{1, 2, 3, 4, 5\}$.
Step 1: Event-Based Lookback.
Rather than comparing to a fixed calendar lookback period, we employ an event-based lookback that compares the current rating to the Nth previous rating event for each analyst–stock pair. For analyst firm j rating stock i, let $R_{ijk}$ denote the kth rating event in chronological order. The grade change is calculated as
$$\Delta R_{ijk} = R_{ijk} - R_{ij,\,k-N}$$
where $N = 3$ is the lookback parameter. This approach ensures that grade changes reflect actual analyst sentiment shifts rather than arbitrary calendar intervals. When multiple rating events occur within a single calendar month, we retain the most recent grade change for that firm–stock–month combination, yielding the monthly grade change $\Delta R_{ijm}$ for subsequent processing.
The event-based specification offers several advantages over calendar-based lookback: (1) it captures actual analyst opinion changes regardless of timing gaps between ratings, (2) it avoids stale comparisons when analysts do not rate every month, and (3) empirical analysis confirms superior signal quality, with 31% higher spreads (1.94% vs. 1.48%) and statistical significance at all horizons. The 3-event lookback aligns conceptually with quarterly earnings cycles while adapting to each analyst’s actual rating frequency.
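The event-based lookback can be sketched directly from its definition. This is an illustrative sketch under the paper’s N = 3 setting, not the authors’ code; `grade_changes` is a hypothetical helper name.

```python
# For one analyst-stock pair, compare each rating event to the Nth
# previous event (event-based lookback) rather than to a calendar date.
def grade_changes(ratings, n=3):
    """ratings: chronological list of ordinal ratings (1-5) issued by one
    analyst firm on one stock. Returns (event index, grade change) pairs."""
    return [(k, ratings[k] - ratings[k - n]) for k in range(n, len(ratings))]

# Example: ratings 3 -> 3 -> 2 -> 1 give, at event k = 3, a change of
# 1 - 3 = -2 (negative = upgrade, since lower ratings are more bullish).
print(grade_changes([3, 3, 2, 1]))  # [(3, -2)]
```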
Step 2: Past-Only Firm-Relative Normalization.
To control for diverse rating patterns across brokerage firms while maintaining strict temporal integrity, we normalize each grade change using exclusively historical information. For firm j evaluating stock i in month m, we calculate the momentum score using the empirical cumulative distribution function (ECDF):
$$M_{ijm} = F_j\left(\Delta R_{ijm} \mid \text{history before month } m\right)$$
Expanding this notation, the ECDF calculation is
$$M_{ijm} = \frac{\#\{\Delta R_{j,m'} < \Delta R_{ijm} : m' < m\} + 0.5 \times \#\{\Delta R_{j,m'} = \Delta R_{ijm} : m' < m\}}{\#\{\text{all } \Delta R_{j,m'} : m' < m\}}$$
Detailed Component Breakdown:
Numerator Terms:
  • $\#\{\Delta R_{j,m'} < \Delta R_{ijm} : m' < m\}$ = count of historical grade changes by firm j that are strictly less than the current change.
  • $0.5 \times \#\{\Delta R_{j,m'} = \Delta R_{ijm} : m' < m\}$ = half-weight for historical changes equal to the current change (standard ECDF convention for ties).
  • The sum gives the rank position of the current change.
Denominator:
  • $\#\{\text{all } \Delta R_{j,m'} : m' < m\}$ = total count of all historical grade changes by firm j before month m.
Key Properties:
  • $0 \le M_{ijm} \le 1$ by construction (percentile rank).
  • Only uses past data: the condition $m' < m$ ensures no look-ahead bias.
  • Firm-specific: normalizes within each brokerage’s historical distribution.
  • Cross-stock: includes firm j’s ratings on all stocks, not just stock i.
This transformation controls for firm-specific biases (e.g., Goldman Sachs may systematically issue more downgrades than Morgan Stanley) while preserving the relative strength of rating changes within each firm’s historical context.
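The past-only ECDF in Step 2 can be sketched as follows. This is a minimal illustration of the tie-adjusted percentile rank, not the authors’ implementation; the `momentum_score` name and the handling of an empty history are assumptions.

```python
# Rank the current grade change within the firm's own historical changes,
# with the standard half-weight for ties; only strictly-past data is used.
def momentum_score(current_change, history):
    """history: firm j's grade changes from months strictly before m."""
    if not history:
        return None  # no prior changes -> score undefined for this firm
    below = sum(1 for h in history if h < current_change)
    ties = sum(1 for h in history if h == current_change)
    return (below + 0.5 * ties) / len(history)

# A change of -2 (a strong upgrade on the 1-5 scale) that is rarer than
# all of this firm's past changes maps to a low percentile.
print(momentum_score(-2, [-1, 0, 0, 1, -2]))  # (0 + 0.5*1) / 5 = 0.1
```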
Step 3: Stock-Level Aggregation.
For each stock–month combination, we aggregate individual momentum scores across covering analyst firms:
$$\bar{M}_{im} = \frac{1}{J_{im}} \sum_{j=1}^{J_{im}} M_{ijm}$$
where $J_{im}$ represents the number of firms covering stock i in month m.
Step 4: Expanding Global Quantile Classification.
The continuous momentum scores must be converted to discrete investment signals (Buy/Hold/Sell). Rather than using fixed thresholds that could become stale, we employ expanding global quantiles that use all historical momentum scores prior to each month. This ensures no look-ahead bias in threshold calculation while capturing the full distributional history of the momentum metric:
$$S_{im} = \begin{cases} \text{Buy} & \text{if } \bar{M}_{im} \ge Q^{\mathrm{exp}}_{75,m} \\ \text{Hold} & \text{if } Q^{\mathrm{exp}}_{25,m} < \bar{M}_{im} < Q^{\mathrm{exp}}_{75,m} \\ \text{Sell} & \text{if } \bar{M}_{im} \le Q^{\mathrm{exp}}_{25,m} \end{cases}$$
Variable Definitions:
  • $S_{im}$ = the investment signal for stock i in month m.
  • $\bar{M}_{im}$ = stock i’s average momentum score in month m (from Step 3).
  • $Q^{\mathrm{exp}}_{25,m}$ = the 25th percentile calculated using all stocks’ momentum scores from months prior to m (expanding window).
  • $Q^{\mathrm{exp}}_{75,m}$ = the 75th percentile calculated using all stocks’ momentum scores from months prior to m (expanding window).
The following example demonstrates this classification process. Suppose the momentum scores for each of the 106 stocks in April 2023 are generated as follows:
  • Stock 1 (AAPL): 0.480;
  • Stock 2 (MSFT): 0.720;
  • Stock 3 (GOOGL): 0.250;
  • (103 more stocks).
From all historical momentum scores prior to April 2023, we calculate the following:
  • $Q^{\mathrm{exp}}_{25}$ = 0.35;
  • $Q^{\mathrm{exp}}_{75}$ = 0.65.
They are then used to classify each stock as follows:
  • MSFT (0.720 ≥ 0.65) → Buy;
  • AAPL (0.35 < 0.480 < 0.65) → Hold;
  • GOOGL (0.250 ≤ 0.35) → Sell.
Adaptive Nature: This approach produces approximately balanced portfolios (empirically: 27% Buy, 46% Hold, 27% Sell) while adapting to evolving market conditions through the expanding window. As more historical data accumulates, the quantile thresholds become increasingly stable, reducing noise in signal generation while maintaining responsiveness to genuine distributional shifts.
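The expanding-quantile classification of Step 4 can be sketched as follows, reproducing the April 2023 worked example. This is an illustrative sketch, not the authors’ code; the prior-score history is hypothetical, chosen so that the simple index-based percentile yields the example thresholds of 0.35 and 0.65.

```python
# Thresholds for month m come from all stocks' momentum scores in months
# strictly before m, so no look-ahead information enters the signal.
def classify(score, past_scores):
    """Map a stock-month momentum score to Buy/Hold/Sell using expanding
    25th/75th percentiles of all past scores."""
    s = sorted(past_scores)
    q25 = s[int(0.25 * (len(s) - 1))]  # simple index-based percentile
    q75 = s[int(0.75 * (len(s) - 1))]  # (an approximation of Q25/Q75)
    if score >= q75:
        return "Buy"
    if score <= q25:
        return "Sell"
    return "Hold"

# Hypothetical history giving Q25 = 0.35 and Q75 = 0.65, as in the text:
past = [0.20, 0.35, 0.50, 0.65, 0.90]
print([classify(x, past) for x in (0.720, 0.480, 0.250)])
# ['Buy', 'Hold', 'Sell']  -> MSFT, AAPL, GOOGL in the example
```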

3.3.2. Comprehensive Illustrative Example: Goldman Sachs Rating Apple (AAPL)

Table 2 provides a complete walkthrough of the normalization logic, demonstrating how a raw grade change is transformed into a final investment signal.
Summary: Despite Goldman’s bearish downgrade, the aggregated signal is “Hold” due to offsetting bullish signals from other firms, demonstrating how our framework synthesizes diverse analyst opinions into unified investment signals.

3.4. Benchmark Models

We compare our momentum-based framework against two established heuristic approaches commonly employed in academic research and practical applications.

3.4.1. Plurality Vote Method

This approach classifies sentiment based on the modal recommendation category across covering analysts:
$$\text{Signal} = \arg\max_{c} \{ N_{\text{Buy}}, N_{\text{Hold}}, N_{\text{Sell}} \}$$
where $N_c$ represents the combined count of analysts in category c, with Strong Buy and Buy recommendations combined into a single Buy category, and Strong Sell and Sell recommendations combined into a Sell category.
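A minimal sketch of this vote (the function name is hypothetical); the paper does not specify a tie-break rule, so ties here are broken in favor of Buy, then Hold, as an illustrative assumption:

```python
from collections import Counter

def plurality_vote(recommendations):
    """Modal category with Strong Buy folded into Buy, Strong Sell into Sell.

    recommendations: raw analyst categories for one stock-month, e.g.
    ["Strong Buy", "Buy", "Hold"]. Tie-break order (Buy > Hold > Sell)
    is an illustrative assumption, not specified in the paper.
    """
    fold = {"Strong Buy": "Buy", "Strong Sell": "Sell"}
    counts = Counter(fold.get(r, r) for r in recommendations)
    # max() returns the first category in tuple order on ties
    return max(("Buy", "Hold", "Sell"), key=lambda c: counts[c])
```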

3.4.2. Buy-Ratio Threshold Method

This approach employs the proportion of positive recommendations with a fixed-threshold classification:
$$\text{BuyRatio} = \frac{N_{\text{StrongBuy}} + N_{\text{Buy}}}{\sum_{c \in \text{all categories}} N_c}$$
Classification follows predetermined thresholds:
  • Buy: BuyRatio ≥ 0.6.
  • Hold: 0.4 < BuyRatio < 0.6.
  • Sell: BuyRatio ≤ 0.4.
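The thresholds translate directly into code; a sketch assuming recommendation counts keyed by category name (`buy_ratio_signal` is a hypothetical helper):

```python
def buy_ratio_signal(counts: dict) -> str:
    """Fixed-threshold Buy/Hold/Sell from analyst recommendation counts.

    counts: category name -> number of analysts, e.g.
    {"Strong Buy": 3, "Buy": 2, "Hold": 4, "Sell": 1}.
    """
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no covering analysts")
    ratio = (counts.get("Strong Buy", 0) + counts.get("Buy", 0)) / total
    if ratio >= 0.6:
        return "Buy"
    if ratio <= 0.4:
        return "Sell"
    return "Hold"
```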

3.5. Data Coverage and Temporal Patterns

An important empirical finding concerns systematic differences in data coverage between consensus-based and grade-based approaches that have implications for model comparison and interpretation.
Coverage Statistics:
  • Consensus-based models: 7633 symbol–month observations.
  • Grade-based models (including momentum): 6151 symbol–month observations.
  • Coverage gap: 1482 symbol–months (19.4%) have consensus data but no rating events.
This discrepancy arises from fundamental differences in data structure rather than missing data:
Consensus Data Characteristics:
  • Provides monthly snapshots of aggregate analyst sentiment.
  • Includes carried-forward ratings when no changes occur.
  • Updates monthly regardless of analyst activity.
  • Example: If 10 analysts cover AAPL and none change ratings in February, consensus still reports “10 Buy, 0 Hold, 0 Sell”.
Grade-Based Data Characteristics:
  • Requires actual analyst rating events (upgrades/downgrades).
  • No observation if no analyst changes ratings.
  • Captures periods of active information processing.
  • Example: If no analyst changes AAPL ratings in February, no grade-based observation exists.
Temporal Analysis of Periods Without Rating Events: We analyzed the 1482 symbol–months lacking rating events and found the following:
  • Mean time since last analyst grade: 2.8 months.
  • Distribution highly skewed: 66% have exactly a 1-month gap.
  • Concentration in traditionally quiet periods (August, December).
  • Lower volatility during missing months (mean realized volatility: 18.2% vs. 23.1% for covered months).
Implications: The 2.8-month average gap between analyst actions aligns remarkably well with quarterly earnings cycles, providing external validation for our 3-month lookback specification. More importantly, this pattern suggests that the momentum approach naturally filters for periods of active information arrival, potentially enhancing signal quality by focusing on times when analysts actively revise their views rather than periods of rating inertia.
This coverage difference should be considered when comparing model performance: baseline models evaluate on a broader but potentially less informative sample, while momentum models concentrate on periods with demonstrated analyst engagement.
Having established our methodology and benchmark approaches, we now turn to the performance evaluation framework that will enable rigorous assessment of our momentum-based normalization approach relative to traditional methods.

4. Performance Evaluation Framework

4.1. Return Calculation Methodology

We calculate forward returns over multiple horizons to assess signal predictive performance comprehensively:
$$r_{i,t \to t+h} = \frac{P_{i,t+h}}{P_{i,t}} - 1$$
where $P_{i,t}$ represents the dividend-adjusted closing price at time t and $P_{i,t+h}$ represents the price h months forward. We examine $h \in \{1, 2, 3\}$-month horizons to capture both short-term information diffusion dynamics and medium-term price adjustment processes. All returns are strictly forward-looking, measuring price appreciation from the signal generation date.
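The calculation reduces to a shifted price ratio; a sketch assuming a monthly series of dividend-adjusted closes (function name hypothetical):

```python
import pandas as pd

def forward_returns(prices: pd.Series, horizons=(1, 2, 3)) -> pd.DataFrame:
    """Forward returns r_{t->t+h} = P_{t+h}/P_t - 1 for each horizon h.

    prices: monthly dividend-adjusted closing prices, in time order.
    Months at the end of the sample come out NaN where P_{t+h} is
    unavailable, matching the paper's exclusion of observations that
    lack forward price data.
    """
    return pd.DataFrame(
        {f"{h}m": prices.shift(-h) / prices - 1.0 for h in horizons},
        index=prices.index)
```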

4.2. Portfolio Construction and Statistical Testing

For each methodology and time period, we construct equal-weighted portfolios based on signal classifications. The primary performance metric is the Buy–Sell return spread:
$$\mathrm{Spread}_h = \bar{r}_{\mathrm{Buy},h} - \bar{r}_{\mathrm{Sell},h}$$
where $\mathrm{Spread}_h$ represents the return spread over h months; $\bar{r}_{\mathrm{Buy},h}$ represents the mean forward return of the Buy portfolio over horizon h; and $\bar{r}_{\mathrm{Sell},h}$ represents the mean forward return of the Sell portfolio over horizon h.
Statistical significance is assessed using Welch’s t-test, appropriate for unequal sample sizes and potentially different variances across portfolio categories:
$$t = \frac{\bar{r}_{\mathrm{Buy}} - \bar{r}_{\mathrm{Sell}}}{\sqrt{\dfrac{s_{\mathrm{Buy}}^2}{n_{\mathrm{Buy}}} + \dfrac{s_{\mathrm{Sell}}^2}{n_{\mathrm{Sell}}}}}$$
where t represents the t-statistic; $\bar{r}_{\mathrm{Buy}}$ and $\bar{r}_{\mathrm{Sell}}$ represent the mean returns for Buy and Sell portfolios, respectively; $s_{\mathrm{Buy}}^2$ and $s_{\mathrm{Sell}}^2$ represent the sample variances; and $n_{\mathrm{Buy}}$ and $n_{\mathrm{Sell}}$ represent the number of observations in each portfolio.
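The statistic can be computed directly from the two return samples; a self-contained sketch (equivalent in its t-value to `scipy.stats.ttest_ind(..., equal_var=False)`):

```python
import math

def welch_t(buy_returns, sell_returns):
    """Welch's t-statistic for the Buy-Sell spread (unequal variances)."""
    n_b, n_s = len(buy_returns), len(sell_returns)
    mean_b = sum(buy_returns) / n_b
    mean_s = sum(sell_returns) / n_s
    # Sample variances with the (n - 1) Bessel correction
    var_b = sum((r - mean_b) ** 2 for r in buy_returns) / (n_b - 1)
    var_s = sum((r - mean_s) ** 2 for r in sell_returns) / (n_s - 1)
    return (mean_b - mean_s) / math.sqrt(var_b / n_b + var_s / n_s)
```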
We supplement the primary results with a robustness analysis, including alternative threshold specifications and taxonomy sensitivity tests.
With our evaluation framework established, we now present the empirical results that demonstrate the superior performance of our momentum-based approach compared to traditional heuristic methods.

5. Empirical Results

5.1. Descriptive Statistics

Table 3 presents the summary statistics for our comprehensive dataset spanning January 2019 through April 2025.
The reduction from 7843 to 7633 consensus observations in subsequent tables reflects the requirement for forward price data availability for return calculations, as observations at the sample period end lack sufficient forward-looking price information.
The data reveal several important empirical patterns. The well-documented optimistic bias in analyst recommendations persists across methodologies, with plurality voting classifying 77.7% of observations as “Buy” compared to only 0.7% as “Sell.” The difference between consensus observations (7633) and grade-based signals (6151) reflects periods of analyst inactivity aligned with quarterly reporting cycles, as discussed in Section 3.

5.2. Main Results: Portfolio Performance Comparison

Table 4 presents our core empirical findings, demonstrating the superior predictive performance of the momentum approach across multiple temporal horizons.
The empirical results demonstrate the clear superiority of the momentum approach across several dimensions:
  • Economic Significance: The momentum framework generates economically meaningful return spreads that increase with investment horizon (0.96% → 1.36% → 1.94%), consistent with theoretical expectations of gradual information diffusion and price discovery processes.
  • Statistical Significance: The momentum approach achieves robust statistical significance across all temporal horizons, with p-values consistently below conventional significance levels (p < 0.003 for all horizons).
  • Logical Monotonicity: The momentum framework exhibits intuitive return ordering (Sell < Hold < Buy), while the Plurality Vote method displays paradoxical patterns, with Sell signals generating the highest returns, likely due to the extremely small sample size (57 observations) in the Sell category.

Statistical Significance and Robustness Analysis

The momentum framework’s consistent statistical significance across multiple time horizons warrants detailed examination, as this level of robustness is exceptional in financial markets research, where most documented anomalies exhibit significance only at specific holding periods or in particular market conditions.
Understanding the Statistical Tests: Our t-statistic of 3.66 for the 3-month Buy–Sell spread requires careful interpretation:
  • Null Hypothesis ($H_0$): The true Buy–Sell spread equals zero (momentum signals have no predictive power).
  • Alternative Hypothesis ($H_1$): The true Buy–Sell spread differs from zero.
  • Test Statistic: t = 3.66 indicates our observed spread (1.94%) is more than 3.5 standard errors away from zero.
  • p-value Interpretation: p < 0.001 means that if the null hypothesis were true (no predictive power), we would observe a spread this large or larger in fewer than 0.1% of random samples.
Contextualizing the Magnitude: In financial markets, where the efficient market hypothesis suggests predictable returns should be arbitraged away, finding p ≤ 0.002 across multiple horizons is noteworthy:
  • 1-month horizon: Spread = 0.96%, t = 3.07, p = 0.002.
  • 2-month horizon: Spread = 1.36%, t = 3.07, p = 0.002.
  • 3-month horizon: Spread = 1.94%, t = 3.66, p < 0.001.
The consistency across horizons suggests that we have identified a robust phenomenon rather than a statistical artifact. Consistent with Harvey et al. (2016), who recommend a multiple-testing hurdle of t > 3.0 for new asset-pricing results, our 1- to 3-month spreads clear this stricter threshold (t = 3.07 to 3.66), remaining significant even under conservative family-wise error control.
Robustness Considerations: The t-statistic calculation uses Welch’s adjustment for unequal variances, appropriate given different sample sizes (1635 Buy signals vs. 1656 Sell signals for momentum, but with different volatilities; Table 5). The consistent significance suggests the results are not driven by outliers or specific market periods, though future research should examine subsample stability and out-of-sample performance.

5.3. Signal Distribution Analysis

Table 5 presents the distribution of investment signals generated by each methodology.
Table 5. Signal distribution by methodology.
Model | Buy | Hold | Sell | Total
Plurality Vote | 5931 (77.7%) | 1645 (21.6%) | 57 (0.7%) | 7633
Buy-Ratio Classification | 4145 (54.3%) | 1958 (25.7%) | 1530 (20.0%) | 7633
Momentum-Based | 1635 (26.6%) | 2860 (46.5%) | 1656 (26.9%) | 6151
The momentum framework generates a balanced distribution by design through quartile-based classification, while heuristic methods exhibit the well-documented optimistic bias in analyst recommendation distributions. The momentum model processes fewer total observations (6151 vs. 7633), as it requires sufficient historical data for calculating meaningful grade changes.

5.4. Robustness Analysis

5.4.1. Lookback Period Sensitivity Analysis

A critical methodological decision concerns how to calculate grade changes: calendar-based lookback (comparing to a rating N months prior) versus event-based lookback (comparing to the Nth previous rating event). Table 6 presents a comprehensive sensitivity analysis across both approaches and multiple lookback periods.
The results demonstrate the clear superiority of the event-based lookback: the three-event specification achieves statistical significance at all three horizons (p ≤ 0.002), whereas the calendar-based methods achieve significance only at isolated horizons. The event-based three-event lookback outperforms the best calendar-based specification by 31% (1.94% vs. 1.48% at 3M). This finding motivates our adoption of the event-based methodology, which better captures actual analyst sentiment shifts rather than arbitrary calendar intervals.

5.4.2. Quantile Threshold Research

The choice of quantile thresholds for signal classification represents another key methodological decision. Table 7 documents our systematic evaluation of threshold configurations.
Two critical findings emerge. First, expanding global quantiles substantially outperform monthly cross-sectional quantiles across all threshold specifications. Monthly cross-sectional approaches fail to achieve statistical significance, likely due to insufficient observations within single months for reliable threshold estimation. Second, among expanding approaches, Q25/Q75 achieves the highest t-statistic (3.66) and lowest p-value (0.0003), supporting our methodological choice.

5.4.3. Taxonomy Sensitivity Analysis

Sensitivity analysis involving reclassification of borderline-positive terms (e.g., “Positive,” “Above Average”) from Buy to Hold categories leaves our main results qualitatively unchanged. The Buy–Sell spread remains positive and statistically significant across all time horizons, indicating that our findings are not sensitive to reasonable alternative taxonomy specifications.
The empirical evidence presented demonstrates the clear superiority of our momentum-based framework. We now turn to a discussion of the economic mechanisms underlying this performance, practical implementation considerations, and the broader implications for financial research and investment practice.

5.5. Sector-Specific Firm Performance Analysis

While our momentum framework demonstrates robust performance across the entire sample, financial theory suggests that analyst expertise may exhibit sector-specific patterns. Different sectors possess distinct technological, regulatory, and competitive characteristics that could favor analysts with specialized industry knowledge. To investigate this variation in performance, we extend our analysis by identifying the top-performing analyst firms within each sector and examining whether concentrated firm expertise generates superior risk-adjusted returns compared to the aggregate momentum framework presented in Section 5.

5.5.1. Methodology: Firm-Isolated Momentum Scoring

We implement a refined version of our momentum framework that calculates performance metrics at the firm–sector level. The key methodological distinction is that each analyst firm’s momentum scores are calculated using their complete historical distribution across all stocks they cover, but performance is evaluated separately for each sector. This “firm-isolated” approach ensures the following:
  • Historical Context Preservation: Each firm’s momentum scores reflect its full behavioral history, avoiding sparse-data problems that could arise from sector-specific normalization.
  • Cross-Sector Comparability: Firms covering multiple sectors can be evaluated consistently using a unified scoring methodology.
  • Minimum Statistical Thresholds: We require at least five observations in each signal category (Buy/Hold/Sell) per firm–sector combination to ensure robust spread calculations, with a relaxed threshold of three observations for the Gold sector due to lower analyst coverage in precious metals equities.
For each firm–sector pair, we calculate the Buy–Sell spread using the methodology defined in Section 5, with the distinction that performance is evaluated separately within each sector:
$$\mathrm{Spread}_{j,s} = \bar{r}_{\mathrm{Buy},j,s} - \bar{r}_{\mathrm{Sell},j,s}$$
where j indexes analyst firms, s indexes sectors, and $\bar{r}_{\mathrm{Buy},j,s}$ represents the mean 3-month forward return for stocks in sector s receiving Buy signals from firm j.
We rank firms within each sector by this spread metric and identify the top-five performers per sector. The minimum observation threshold filters out spurious results, while the top-five selection balances specialization benefits against diversification considerations.
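The ranking and filtering steps can be sketched as follows; the long-format table layout and column names are hypothetical assumptions about how the event data might be organized:

```python
import pandas as pd

def top_firms_by_sector(events: pd.DataFrame, top_n=5, min_obs=5):
    """Rank analyst firms within each sector by their Buy-Sell spread.

    events: hypothetical long-format table with columns
    ['firm', 'sector', 'signal', 'fwd_3m'] -- one row per signal,
    signal in {'Buy', 'Hold', 'Sell'}, fwd_3m the 3-month forward return.
    Firm-sector pairs with fewer than min_obs observations in any
    signal category are dropped before ranking.
    """
    rows = []
    for (firm, sector), grp in events.groupby(["firm", "sector"]):
        counts = grp["signal"].value_counts()
        if any(counts.get(s, 0) < min_obs for s in ("Buy", "Hold", "Sell")):
            continue  # insufficient data for a reliable spread
        spread = (grp.loc[grp["signal"] == "Buy", "fwd_3m"].mean()
                  - grp.loc[grp["signal"] == "Sell", "fwd_3m"].mean())
        rows.append({"firm": firm, "sector": sector, "spread": spread})
    ranked = pd.DataFrame(rows).sort_values("spread", ascending=False)
    return ranked.groupby("sector").head(top_n)
```

In practice the `min_obs` threshold would be lowered to 3 for thinly covered sectors such as Gold, per the rule stated above.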

5.5.2. Top-Performing Firms by Sector

Table 8 presents the top-performing analyst firms identified through our sector-specific analysis, revealing substantial variation in firm expertise across market sectors.

5.5.3. Sector Expertise Patterns

Several noteworthy patterns emerge from the firm-level analysis:
Cross-Sector Specialists: Certain firms appear in multiple sectors’ top-five rankings, suggesting broad analytical capabilities. RBC Capital Markets ranks in the top five for six sectors (Consumer Discretionary, Energy, Gold, Real Estate, Technology, and Telecommunications), demonstrating consistent momentum signal quality across diverse industries. Similarly, Raymond James appears in five sectors (Financials, Gold, Healthcare, Real Estate, and Technology), indicating versatile expertise.
Figure 2. Performance of top-5 firms across sectors—Buy/Hold/Sell returns and spread. Bars show the mean 3-month forward returns for Buy (green), Hold (grey), and Sell (red) signals across each sector’s top-5 firms. The orange line with labeled points shows the Buy–Sell spread percentage. The Technology (10.6%) and Gold (8.3%) sectors exhibit the highest spreads, while Utilities (3.3%) and Consumer Staples (3.1%) show more modest but consistent performance.
Sector Concentration: The best-performing spreads concentrate in sectors with higher information asymmetry and valuation uncertainty. The Gold sector exhibits the highest single-firm spread (Raymond James: 13.8%), followed by Technology (RBC: 14.7%) and Consumer Discretionary (RBC: 13.7%). In contrast, traditionally stable sectors like Utilities (4.1% best spread) and Consumer Staples (6.8%) show more modest differentiation, consistent with their lower volatility and more transparent business models.
Coverage–Performance Relationship: Sectors achieving 100% coverage (Energy, Consumer Staples, Industrials, Real Estate, Utilities) tend to exhibit lower average spreads (3.0–5.8%) compared to sectors with partial coverage. This suggests that comprehensive analyst coverage may reduce information advantages, supporting the gradual information diffusion hypothesis of Hong et al. (2000).

5.5.4. Aggregated Top-Five Performance vs. Baseline

To assess whether sector-specific firm selection generates superior performance, we construct portfolios using only momentum signals from each sector’s top-five firms and compare them against both our full-sample momentum framework and traditional baseline methods. The sector-by-sector comparison reveals consistent performance gains across all 12 market segments, with particularly strong results in sectors characterized by higher information asymmetry. The sector-by-sector analysis shows mean improvements of +7.21 percentage points versus the Plurality Vote method and +5.94 percentage points versus the Buy-Ratio method.

5.5.5. Economic and Statistical Interpretation

Table 9 presents the sector-by-sector improvement analysis. The sector-specific approach generates a 5.8% average three-month Buy–Sell spread across sectors, representing a 3.9 percentage point improvement over the full momentum framework’s 1.94% spread. Annualized, the top-five approach translates to approximately 23.2% gross alpha, compared to 7.8% for the baseline momentum approach.
Several factors contribute to this performance enhancement:
  • Expertise Concentration: By restricting to demonstrated top performers within each sector, the approach effectively filters for analysts with superior information access, processing capabilities, or industry-specific expertise.
  • Behavioral Consistency: Firms ranking in the top five exhibit stable performance across time (see Section 5.5.7), suggesting persistent skill rather than transient luck.
  • Information Asymmetry Exploitation: The largest improvements occur in sectors with complex business models (Technology: average 10.6%; Gold: 8.4%; Consumer Discretionary: 9.6%), where specialized knowledge provides greater competitive advantage.
The sample size reduction is minimal (5816 vs. 6151 observations, only a 5% decrease), representing a favorable trade-off between coverage breadth and signal quality. This reduction stems primarily from requiring sufficient historical data to rank firms within each sector. The substantial performance improvement (4.1 percentage point spread increase) with minimal sample loss strongly supports the economic value of sector-specific firm selection.

5.5.6. Practical Implementation Considerations

Implementing the sector-specific approach introduces additional operational complexity, as described below.
Data Requirements: Beyond the base momentum framework’s requirements, practitioners require the following:
  • Continuous tracking of firm rankings within each sector.
  • Periodic recalibration to adapt to evolving firm performance.
  • A minimum of five observations per signal category (Buy/Hold/Sell) for reliable spread calculation per firm–sector pair (a threshold of 3 is used for sectors with lower analyst coverage, such as Gold).
Rebalancing Considerations: Monthly rebalancing of portfolio positions remains appropriate given the 3-month return horizon and the quarterly earnings cycle alignment documented in Section 3.
Scalability Constraints: The approach is best suited for the following:
  • Large-cap equities with multiple analyst coverage.
  • Institutional portfolios with access to comprehensive analyst data.
  • Long-only strategies or long–short implementations.

5.5.7. Temporal Stability Analysis

To assess the persistence of firm performance across time, we conducted quarterly rolling-window analysis using 12-month lookback periods. This analysis examines whether firms maintaining top-five rankings demonstrate consistent performance or exhibit significant temporal variation.
Methodology: For each quarter from Q1 2019 through Q2 2025, we (1) calculate firm rankings using trailing 12-month data within each window, (2) track which firms appear in the top five for each sector in each quarter, (3) measure the frequency of top-five appearances across all analyzed quarters, and (4) calculate the average rank when firms do appear in the top five.
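Steps (2)–(4) above reduce to simple bookkeeping over the quarterly ranking lists; a sketch under the assumption that the per-quarter top-five lists for one sector have already been computed (all names here are hypothetical):

```python
from collections import defaultdict

def stability_stats(quarterly_top5):
    """Presence frequency and average rank for firms appearing in a
    sector's quarterly top-5 lists.

    quarterly_top5: one entry per analyzed quarter, each an ordered list
    of firm names ranked #1 (best) downward for that sector.
    """
    appearances = defaultdict(list)
    for quarter in quarterly_top5:
        for rank, firm in enumerate(quarter, start=1):
            appearances[firm].append(rank)
    n_quarters = len(quarterly_top5)
    return {firm: {"presence": len(ranks) / n_quarters,
                   "avg_rank": sum(ranks) / len(ranks)}
            for firm, ranks in appearances.items()}
```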
Note on Sample Period: While the full dataset spans December 2011 through April 2025 (68,660 total grade events), the rolling-window stability analysis uses quarterly evaluation points from Q1 2019 through Q2 2025. Each quarterly evaluation requires 12 months of historical data for firm ranking calculations. Data availability varies by sector due to differences in analyst coverage patterns and signal generation requirements, resulting in 22–26 analyzable quarters per sector.
The heatmap visualization (Figure 3) illustrates quarterly firm rankings across four representative sectors over 22–26 quarters, where dark green indicates a #1 ranking, orange indicates a #5 ranking, and white indicates absence from the top five. Visual inspection reveals that certain firms (notably Deutsche Bank, Morgan Stanley, and Citigroup) maintain consistent shading across consecutive quarters, confirming meaningful performance persistence rather than random variation. Importantly, the most consistently appearing firms in Figure 3 differ from the highest-performing firms in Table 8, revealing a distinction between peak performance and sustained presence—for instance, while RBC Capital Markets ranks #1 in Technology performance (Table 8), Deutsche Bank and Morgan Stanley show broader cross-sector consistency, appearing in all four displayed sectors.
Notable Patterns:
  • Performance vs. Stability Trade-off: The most consistently present firms (Figure 3) differ from those achieving the highest absolute spreads (Table 8), demonstrating that temporal stability and peak performance represent distinct dimensions of analyst firm expertise.
  • Cross-Sector Consistency: Deutsche Bank and Morgan Stanley demonstrate strong temporal consistency, appearing in all four displayed sectors’ top-five rankings across multiple quarters. Deutsche Bank averages 58.7% presence, while Morgan Stanley averages 38.5% across these sectors, indicating robust analytical capabilities that persist across market conditions, despite not always achieving the highest spreads.
  • Sector-Specific Variation: Energy and Financials exhibit higher ranking stability (top firms at 100% presence) compared to Technology (top firm at 76.9%), suggesting different dynamics in information advantage persistence across sectors.
The analysis demonstrates that superior spreads are not artifacts of isolated periods but reflect sustained performance patterns suitable for systematic strategies.
Cross-Sector Firm Expertise Patterns:
Figure 4 displays two complementary heatmaps:
  • Left panel: Buy–Sell spread magnitudes across all firm–sector combinations, using the red-yellow-green gradient.
  • Right panel: Within-sector rankings (1 = best to 5 = fifth), using the green intensity scale.
The heatmaps reveal that 16 firms appear in multiple sectors’ top-five rankings, with RBC Capital Markets achieving top-five status across six sectors and Raymond James across five sectors. This cross-sector success suggests that certain firms possess analytical capabilities or information access advantages that transcend industry-specific expertise.

5.5.8. Implications for the Momentum Framework

These sector-specific findings extend our momentum normalization methodology in two important dimensions:
Theoretical Implications: The substantial performance improvement from firm-specific selection validates the hypothesis that analyst expertise exhibits sector-level variation. This aligns with the specialized knowledge hypothesis of Green (2006) and the information network theory of Ali and Hirshleifer (2020), suggesting that information advantages are not uniformly distributed across analyst firms.
Practical Implications: For practitioners implementing the momentum framework, the baseline approach (Section 5) provides robust performance using all available analysts and is appropriate for broad equity portfolios, while the sector-specific variant (this section) offers enhanced performance for investors able to implement selective coverage strategies. The additional alpha (4.1 percentage points, or approximately 16.4% annualized) must be weighed against implementation complexity and reduced diversification.
The firm-isolated momentum approach represents a natural extension of our core methodology, demonstrating that the temporal dynamics and firm-relative normalization principles maintain effectiveness when combined with sector-specific performance filtering. This layered approach—first normalizing across time and firms, then selecting top performers within sectors—provides a comprehensive framework for extracting maximum information content from analyst recommendation data.

6. Discussion and Practical Implementation

6.1. Economic Mechanisms and Literature Comparison

The momentum framework’s superior performance stems from three key theoretical mechanisms: (1) dynamic adaptation through time-varying thresholds and firm-specific normalization, (2) firm-specific calibration controlling for different rating philosophies and institutional biases across brokerage houses, and (3) focus on the information content of rating changes, capturing marginal information flow that drives price discovery.
Our 1.94% three-month return spread compares favorably with the analyst recommendation literature: Womack (1996) reports −9.1% six-month Sell returns, Barber et al. (2001) find 1.02% monthly consensus-based returns, Green (2006) documents 1.02–1.50% two-day returns from early recommendation access, and Jegadeesh et al. (2004) report 1.76% for upgrades and −3.21% for downgrades.

6.2. Out-of-Sample Validation

To address concerns about data snooping and overfitting (Harvey et al., 2016), we conducted pre-registered out-of-sample (OOS) validation using a strict temporal holdout design. Our methodology was developed and calibrated using data through April 2025, with all parameter choices (three-event lookback, expanding global quantile thresholds) finalized before examining any data from the OOS period.
The OOS test period spans May through September 2025, representing five complete months of forward data. The results demonstrate robust signal retention: the average one-month Buy–Sell spread of +1.03% in the OOS period represents 107% of the in-sample spread (0.96%), indicating no meaningful performance degradation. While individual months exhibit variability (ranging from −0.9% to +2.9%), this volatility is consistent with the in-sample standard errors and does not suggest a structural breakdown of the signal.
These OOS results provide important validation that our framework captures genuine predictive relationships rather than in-sample artifacts. The 107% retention rate compares favorably with McLean and Pontiff (2016), who documented a typical 30–50% post-publication decay for anomalies, suggesting that our strict pre-registration protocol and methodological choices (expanding quantiles, event-based lookback) successfully avoid the overfitting that commonly afflicts academic factor research.

6.3. Risk-Adjusted Performance

Beyond raw return spreads, we examined risk-adjusted performance using standard factor models and portfolio risk metrics. Using the Fama–French six-factor model (FF6), which includes market, size, value, profitability, investment, and momentum factors, we estimated monthly alpha regressions over the full 76-month sample period.
Individual signal portfolios demonstrate economically significant alphas: the Buy portfolio generates 13.6% annualized alpha (t = 3.81), while even the Sell portfolio produces a positive alpha of 12.5% (t = 3.09) when measured against the risk-free rate. However, the long–short spread portfolio does not generate statistically significant factor-adjusted alpha ( α = −1.3% annualized, t = −0.65). This pattern indicates that while our methodology successfully identifies relatively stronger and weaker stocks, both groups share similar factor exposures.
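The alpha estimates above come from ordinary least-squares regressions of monthly excess portfolio returns on the factor returns; a generic sketch of that regression (the FF6 factor series themselves must be obtained separately, e.g. from Kenneth French's data library):

```python
import numpy as np

def monthly_alpha(excess_returns, factors):
    """OLS intercept (alpha) from the regression r_t = alpha + b' f_t + e_t.

    excess_returns: (T,) portfolio returns in excess of the risk-free rate.
    factors: (T, K) matrix of factor returns, e.g. the six FF6 factors.
    The monthly alpha annualizes (approximately) as 12 * alpha.
    """
    X = np.column_stack([np.ones(len(excess_returns)), np.asarray(factors)])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(excess_returns), rcond=None)
    return coefs[0]  # the intercept is the first coefficient
```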
Portfolio risk metrics reveal favorable characteristics across signal categories. The Sharpe ratio, measuring excess return per unit of volatility, indicates strong risk-adjusted performance when exceeding 1.0; our Buy portfolio achieves 1.28, and Hold achieves 1.40, substantially outperforming the S&P 500 (0.73) and the Fama–French momentum factor (0.11), which captures the tendency of past price winners to continue outperforming past losers. The Sortino ratio, which penalizes only downside volatility, shows even stronger results (Buy: 1.75, Hold: 2.04), indicating the portfolios capture upside while limiting harmful drawdowns. Maximum drawdown remains contained at −12.2% for Buy and −9.8% for Hold, compared to −15.7% for Sell, consistent with the greater volatility typically observed in underperforming stocks.
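The three risk metrics quoted above can be reproduced from a monthly return series; a sketch using one common Sortino variant (the sample standard deviation of the negative months) rather than the full downside-deviation formula:

```python
import numpy as np

def risk_metrics(monthly_returns, rf_monthly=0.0, periods=12):
    """Annualized Sharpe and Sortino ratios plus maximum drawdown.

    Assumes at least two negative months for the Sortino denominator.
    """
    r = np.asarray(monthly_returns, dtype=float)
    excess = r - rf_monthly
    sharpe = np.sqrt(periods) * excess.mean() / excess.std(ddof=1)
    downside = excess[excess < 0]
    sortino = np.sqrt(periods) * excess.mean() / downside.std(ddof=1)
    wealth = np.cumprod(1.0 + r)
    # Worst peak-to-trough loss of the cumulative wealth curve
    max_drawdown = (wealth / np.maximum.accumulate(wealth) - 1.0).min()
    return sharpe, sortino, max_drawdown
```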
To better understand the drivers of this risk-adjusted performance, we decompose the specific return characteristics and factor exposures of the individual signal portfolios below.

6.3.1. Hold Portfolio Characteristics

A notable finding is that the Hold portfolio exhibits the strongest risk-adjusted performance (Sharpe ratio 1.40 vs. 1.28 for Buy), despite generating slightly lower raw returns. Table 10 presents the complete risk-adjusted performance metrics by signal category. This pattern suggests that stocks receiving stable, unchanged analyst sentiment, neither upgraded nor downgraded in momentum-normalized terms, represent high-quality companies with favorable risk characteristics.
Several economic mechanisms may explain this phenomenon:
  • Quality Signal: Stable analyst sentiment often reflects consensus on fundamentally sound companies. When analysts collectively maintain positions without directional changes, this may indicate low uncertainty about firm quality and predictable earnings trajectories.
  • Lower Volatility: Hold-classified stocks exhibit the lowest portfolio volatility (7.4% annualized vs. 8.3% for Buy and 9.4% for Sell), contributing to superior Sharpe ratios despite marginally lower returns.
  • Defensive Characteristics: The Hold portfolio shows the most defensive market beta (−0.108 vs. −0.079 for Buy), potentially explaining its smaller maximum drawdown (−9.8% vs. −12.2%).
For practitioners, this finding suggests that momentum-normalized Hold signals identify stable, high-quality positions suitable for core portfolio holdings, while Buy signals may be more appropriate for tactical allocation decisions, where higher volatility is acceptable.

6.3.2. Understanding Long–Short Performance

The near-zero long–short alpha (−0.11% monthly, t = −0.65) warrants explanation given the significant individual portfolio alphas (Buy: 1.13%, t = 3.81; Sell: 1.04%, t = 3.09). Table 11 reveals that Buy and Sell portfolios exhibit remarkably similar factor exposures: all six Fama–French factors carry the same sign in both portfolios, with magnitude differences ranging from only 0.001 to 0.061.
The long–short portfolio earns a positive raw return (+0.11% monthly), as expected given that Buy outperforms Sell. However, because Buy has slightly larger positive factor loadings than Sell (e.g., SMB: 0.215 vs. 0.154), the long–short portfolio retains small but positive exposures to all six factors. These positive factor tilts generate approximately 0.22% in expected factor returns, fully explaining the raw spread and producing a slightly negative alpha after factor adjustment.
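In arithmetic terms, the decomposition above is simply alpha = raw spread minus the return implied by the residual factor tilts. Using the monthly figures reported in the text:

```python
# Alpha decomposition for the long-short portfolio (monthly figures from the text).
raw_spread = 0.0011              # +0.11% raw Buy-minus-Sell return
implied_factor_return = 0.0022   # ~0.22% expected return from residual factor tilts
alpha = raw_spread - implied_factor_return
# A positive raw spread therefore coexists with a slightly negative alpha
# (about -0.11% monthly), matching the reported long-short estimate.
```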
The convergent factor loadings arise because our sample consists of large-cap stocks with similar fundamental characteristics, and the momentum framework classifies stocks based on the direction of analyst sentiment change rather than firm attributes. Any stock in the universe can receive a Buy or Sell signal depending on recent analyst behavior, so both portfolios are drawn from the same pool of securities with comparable size, value, profitability, and investment profiles.
This finding has important implications for framework interpretation:
  • Stock Selection vs. Hedged Strategy: The framework excels at identifying stocks likely to outperform within their risk class (generating positive alpha for all signal categories), but the similar risk profiles prevent meaningful long–short alpha extraction.
  • Level vs. Spread Information: All portfolios generate positive alpha relative to factor benchmarks, indicating the analyst sentiment signal captures information orthogonal to standard factors. Within our large-cap sample, this information advantage manifests as level (all portfolios elevated above benchmark) rather than spread (differential performance between Buy and Sell), though this pattern may differ in more heterogeneous stock universes.
  • Implementation Guidance: Practitioners should employ the framework for long-only stock selection or tactical overweighting rather than market-neutral strategies. The Buy signal’s superior Sharpe ratio (1.28) relative to market benchmarks (0.73) validates this application.
These results suggest that the framework is most effective as a stock selection tool within a broader investment process rather than as a standalone long–short strategy. The significant individual portfolio alphas and superior Buy-signal Sharpe ratios validate the methodology’s practical utility for enhancing long-only portfolio construction.
Importantly, the signal effects remain robust after controlling for standard firm characteristics. Pooled regressions, including book-to-market ratio and firm size as controls, show that the Buy signal coefficient is unchanged (coefficient = 0.0126, t = 2.71, p = 0.007; Appendix A Table A8). This confirms that the momentum-based signal provides incremental predictive power beyond value and size effects captured by the Fama–French factors.
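The pooled control regression can be sketched in a few lines. The example below uses synthetic inputs (not the paper's data; the data-generating coefficients are hypothetical) and a plain numpy implementation of OLS with HC1 heteroskedasticity-robust standard errors, the error type reported in Appendix A Table A8.

```python
import numpy as np

def ols_hc1(y, X):
    """OLS coefficients with HC1 (White) heteroskedasticity-robust SEs."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = (X * (resid ** 2)[:, None]).T @ X      # X' diag(e^2) X
    cov = (n / (n - k)) * XtX_inv @ meat @ XtX_inv  # HC1 small-sample scaling
    return beta, np.sqrt(np.diag(cov))

# Synthetic panel: forward returns on Buy/Sell dummies plus B/M and size controls.
rng = np.random.default_rng(0)
n = 4000
buy = (rng.random(n) < 0.27).astype(float)
sell = (rng.random(n) < 0.37) * (1.0 - buy)
bm = rng.lognormal(-1.0, 0.5, n)        # book-to-market (hypothetical values)
size = rng.normal(11.0, 1.5, n)         # log market cap (hypothetical values)
y = 0.034 + 0.012 * buy - 0.007 * sell - 0.011 * bm + rng.normal(0, 0.08, n)
X = np.column_stack([np.ones(n), buy, sell, bm, size])
beta, se = ols_hc1(y, X)
```

With Hold as the omitted category, `beta[1]` recovers the Buy-signal effect net of the book-to-market and size controls, mirroring the specification in column (3) of Table A8.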

6.4. Equilibrium Considerations and Implementation Context

An important caveat concerns equilibrium implications if our momentum framework achieved widespread adoption. The framework’s primary contribution is methodological—providing a robust normalization technique for diverse analyst ratings to support downstream machine learning and classification tasks. The evidence of superior Buy–Sell spreads validates that our normalization preserves and enhances information content in analyst ratings, making it superior to naive approaches like simple averaging or plurality voting. In this context, widespread adoption would be beneficial, creating standardized signals across research applications.
If market participants extensively used our signals for direct portfolio construction, several equilibrium effects would emerge. The documented 1.94% spread would likely diminish due to arbitrage and alpha decay; historical precedent from other published anomalies suggests returns typically decay by 30–50% post-publication (McLean & Pontiff, 2016). However, several barriers preserve strategy effectiveness: collecting and standardizing ratings from 270+ firms requires specialized databases and ongoing taxonomy maintenance; preventing look-ahead bias demands careful historical data management; the strategy requires stocks with consistent multi-analyst coverage, limiting scalability; and incorporating momentum signals into existing factor models requires institutional adaptation. Additionally, retail investors and some institutions continue to interpret analyst recommendations at face value despite documented biases (Malmendier & Shanthikumar, 2007), suggesting that complete arbitrage is unlikely.
We emphasize that momentum-normalized signals represent only one input in comprehensive investment processes. Professional investors typically combine multiple signals, including fundamental analysis (valuation, quality metrics), technical indicators (price momentum, volume patterns), risk factors (market beta, size, value exposures), and alternative data (satellite imagery, web traffic, credit card data). Our framework’s value lies not in providing a standalone trading strategy but in offering a superior method to extract information from analyst recommendations within this broader context.

6.5. Implementation Considerations

Practical implementation requires attention to several operational considerations. Reliable distribution estimation calls for a minimum of 12 months of firm-specific historical data, though performance gains remain observable with shorter histories. The framework employs vectorized operations, enabling real-time signal generation suitable for low-frequency or monthly rebalanced systematic strategies. A monthly rebalancing frequency balances signal decay against realistic transaction cost constraints. The balanced signal distribution (27% Buy, 46% Hold, 27% Sell) simplifies portfolio construction and risk management relative to traditional approaches, which generate highly skewed signal distributions.
Transaction Cost Viability: Appendix A Table A7 presents a detailed analysis assuming 20 basis points per round-trip for long positions and 40 basis points for short positions (including borrowing costs). Under these assumptions, the long–short strategy retains a net 3-month return of +1.30% (gross 1.94%), with a break-even cost threshold of approximately 130 basis points per round-trip—well above typical institutional trading costs for liquid large-cap equities. This confirms the strategy’s economic viability under realistic implementation assumptions.
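Under the stated assumptions, the net-return and break-even arithmetic is straightforward. The sketch below parameterizes total costs rather than reproducing the paper's turnover model exactly (the function names and the round-trip count are our own):

```python
def net_return(gross, total_cost):
    """Net return after subtracting total trading costs over the horizon."""
    return gross - total_cost

def breakeven_roundtrip(gross, roundtrips):
    """Round-trip cost (as a fraction) at which net return reaches zero,
    given the number of round-trips executed over the holding period."""
    return gross / roundtrips
```

For example, the paper's long–short figures (1.94% gross with a 0.64% total cost impact) leave a 1.30% net three-month return; the break-even threshold then follows from how many round-trips the turnover model implies over the horizon.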

Baseline vs. Sector-Specific Methodology Selection

Beyond transaction cost considerations, practitioners face a methodological choice between baseline and sector-specific momentum approaches. This decision depends on two empirically measurable characteristics: analyst firm stability and coverage completeness. Table 12 presents sector-specific implementation recommendations based on these criteria.
Sectors exhibiting high stability (exceeding 90%) and complete coverage are candidates for the sector-specific refinement, which offers an average 2.5 percentage point improvement in three-month Buy–Sell spreads. However, sectors with unstable analyst rankings or incomplete coverage should employ the baseline approach, which provides robust signal generation across all securities, regardless of which analyst firms happen to cover them.
The Technology sector presents a particularly instructive case. Despite achieving the highest three-month spread (10.6%), the sector exhibits only 77% stability, and the sector-specific approach covers only 70% of stocks (7 of 10 top holdings). The combination of analyst turnover and coverage gaps makes the baseline approach preferable for practitioners requiring comprehensive technology exposure.
Our framework was developed on large-cap stocks with substantial analyst coverage (mean: 28 unique firms per stock over the sample period, after company name standardization). Extension to smaller-capitalization securities with fewer than five covering analysts may degrade signal quality, as the ECDF estimation underlying momentum score calculation becomes less reliable with sparse rating observations. Practitioners considering such extensions should validate signal performance on appropriate subsamples before implementation.
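The ECDF-based scoring and expanding-quantile classification referred to above can be sketched as follows. This is a simplified single-series illustration (function names and the warm-up length are our own choices), not the paper's full five-stage pipeline, but it shows both the past-only ECDF and the no-look-ahead Q25/Q75 thresholds:

```python
import numpy as np

def past_only_ecdf(history, value):
    """Percentile rank of `value` within strictly prior observations.

    Returns NaN when no history exists (the cold-start case noted in the text)."""
    past = np.asarray(history, dtype=float)
    return float((past <= value).mean()) if past.size else float("nan")

def expanding_quantile_labels(scores, q_lo=0.25, q_hi=0.75, min_history=4):
    """Label each score Buy/Hold/Sell using Q25/Q75 thresholds computed
    only from strictly earlier scores (expanding window, no look-ahead)."""
    labels = []
    for t in range(len(scores)):
        prior = scores[:t]
        if len(prior) < min_history:
            labels.append("Hold")       # warm-up period: default to neutral
            continue
        lo, hi = np.quantile(prior, [q_lo, q_hi])
        s = scores[t]
        labels.append("Buy" if s > hi else ("Sell" if s < lo else "Hold"))
    return labels
```

With sparse rating histories the ECDF is estimated from very few points, which is precisely why the text cautions against extending the framework to stocks with fewer than five covering analysts.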

6.6. Limitations and Model Assumptions

Our momentum-based framework relies on several key assumptions that warrant consideration.
Sample Scope and Generalizability: Our sample consists of the 10 largest companies by market capitalization within each GICS sector, creating a systematic bias toward large-cap firms with extensive analyst coverage. This design choice ensures robust signal estimation but limits generalizability. Mid-cap and small-cap equities, which typically receive sparser and more heterogeneous coverage, may exhibit different signal dynamics. International markets with varying analyst cultures, regulatory environments, and disclosure requirements (Jegadeesh & Kim, 2006) represent an important extension for future research. The framework’s effectiveness likely correlates with analyst coverage intensity, potentially limiting applicability to securities with fewer than 3–5 covering firms.
Data Requirements: The framework requires sufficient firm-specific historical data for reliable distribution estimation, resulting in reduced sample coverage, particularly in early periods, and creating cold-start problems for newly covered stocks or analysts.
Sample Selection and Fin-ALICE Framework: Our sample of 106 large-cap U.S. equities was selected as part of the broader Fin-ALICE research framework (McCarthy & Alaghband, 2024), which focuses on sector-level financial analysis. Specifically, we selected the top 10 holdings from each GICS sector based on sector ETF compositions. This design choice ensures (1) sufficient analyst coverage for robust momentum score calculation, (2) comparability across sectors, and (3) consistency with companion studies examining sector-level dynamics. While this selection approach limits generalizability to smaller capitalization stocks with sparser analyst coverage, it provides an ideal testbed for developing and validating our normalization methodology. The resulting signals serve as high-quality ground-truth labels for machine learning applications in the broader Fin-ALICE research agenda.
Out-of-Sample Horizon Dynamics: While our one-month OOS validation demonstrates strong signal retention (107%), predictive power decays at longer horizons (2-month: 5% retention; 3-month: negative). This pattern is consistent with information diffusion theory: analyst sentiment signals contain maximal predictive content immediately following rating changes, with predictive power dissipating as information becomes incorporated into market prices. Rather than undermining the contribution, this decay pattern supports the theoretical framework: momentum-normalized signals capture timely information that markets gradually absorb.
Market Regime Sensitivity: While our sample spans significant market disruptions, including COVID-19, the framework assumes relatively stable relationships between analyst sentiment changes and returns that may not hold during extreme market conditions or structural regime changes.

7. Conclusions

This research introduces a momentum-based normalization framework that transforms diverse analyst recommendations into standardized investment signals with robust statistical and economic significance. By incorporating the temporal dynamics of analyst sentiment changes and normalizing within firm-specific contexts, our approach generates statistically significant return spreads that persist across multiple investment horizons and market conditions.
Our contributions to the literature are threefold. First, we develop a mathematically rigorous framework for normalizing analyst recommendations that accounts for cross-sectional heterogeneity across more than 270 brokerage firms and the temporal evolution in rating behavior, using past-only ECDF normalization to avoid look-ahead bias while capturing firm-specific rating tendencies. Second, we provide comprehensive empirical evidence that momentum-based normalization significantly outperforms traditional heuristic approaches, generating 1.94% three-month Buy–Sell spreads (t = 3.66, p < 0.001) that meet the Harvey et al. (2016) multiple-testing hurdle. Third, we offer a practical implementation blueprint with complete mathematical specifications, enabling replication and extension by both academic researchers and practitioners.
Future research directions include adapting the framework to international markets with different analyst cultures and regulatory environments (Jegadeesh & Kim, 2006), incorporating machine learning methods to optimize time-varying thresholds and capture nonlinear patterns in analyst behavior, integrating these momentum signals into the broader Fin-ALICE framework as training labels for models that predict analyst rating changes, and exploring sector-specific sources of informational advantage to better understand heterogeneity in analyst skill and access. The demonstrated superiority of momentum normalization over traditional approaches underscores the importance of incorporating temporal dynamics in financial signal processing and provides a theoretically grounded, empirically validated solution to a persistent challenge in analyst recommendation research.

Author Contributions

Conceptualization, S.M. and G.A.; methodology, S.M.; software, S.M.; validation, S.M. and G.A.; formal analysis, S.M.; investigation, S.M.; resources, S.M.; data curation, S.M.; writing—original draft preparation, S.M.; writing—review and editing, S.M. and G.A.; visualization, S.M.; supervision, G.A.; project administration, G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to licensing restrictions from the commercial data provider. The data and research project are made available for academic use only, subject to the terms of the original data license agreement.

Acknowledgments

The authors acknowledge the University of Colorado Denver for providing computational resources and research support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ECDF: Empirical Cumulative Distribution Function
GICS: Global Industry Classification Standard
CRSP: Center for Research in Security Prices
I/B/E/S: Institutional Brokers’ Estimate System

Appendix A. Supplementary Results

This appendix provides detailed tables supporting the robustness and risk-adjusted performance analyses discussed in Section 6.

Appendix A.1. Results Summary Against Significance Thresholds

Table A1 provides a consolidated summary of our key results against established statistical and economic thresholds.
Table A1. Results summary: statistical and economic thresholds.
Criterion | Our Result | Threshold | Pass? | Notes

In-Sample Statistical Significance
In-sample p-value (1M) | 0.0022 | <0.05 | Yes | 1M spread significant at 0.5%
In-sample p-value (2M) | 0.0022 | <0.05 | Yes | 2M spread significant at 0.5%
In-sample p-value (3M) | 0.0003 | <0.05 | Yes | 3M spread significant at 0.1%

Multiple Testing Corrections (Benjamini–Hochberg)
BH-adjusted p (1M) | 0.0032 | <0.05 | Yes | Survives FDR correction
BH-adjusted p (2M) | 0.0032 | <0.05 | Yes | Survives FDR correction
BH-adjusted p (3M) | 0.0006 | <0.05 | Yes | Survives FDR correction
BH-adjusted tests | 12/15 | majority | Yes | 80% survive multiple testing

Harvey et al. (2016) t > 3.0 Threshold
Main 3M spread t-stat | 3.66 | >3.0 | Yes | Exceeds Harvey threshold
Buy FF6 Alpha t-stat | 3.81 | >3.0 | Yes | Exceeds Harvey threshold
Hold FF6 Alpha t-stat | 4.43 | >3.0 | Yes | Strongest signal

Control Variable Robustness
Buy coef. (+B/M + Size) | t = 2.71 | >2.0 | Yes | Signal robust to controls
Buy coef. change | <1% | stable | Yes | 0.0123 → 0.0126

Risk-Adjusted Performance
Buy Sharpe Ratio | 1.28 | >1.0 | Yes | Solid risk-adjusted return
Hold Sharpe Ratio | 1.40 | >1.0 | Yes | Excellent risk-adjusted return
Hold Max Drawdown | −9.8% | >−20% | Yes | Best drawdown among signals
Net return (3M L-S) | +1.30% | >0% | Yes | After transaction costs

Out-of-Sample Validation
OOS 1M Retention | 107% | >50% | Yes | OOS (+1.03%) exceeds IS (+0.96%)
Notes: This table summarizes the key results against established significance and economic thresholds. BH = Benjamini–Hochberg false discovery rate correction applied to 15 simultaneous hypothesis tests. The Harvey threshold refers to Harvey et al. (2016)’s recommendation that t-statistics exceed 3.0 for new factors. Control variables include book-to-market ratio (B/M) and firm size (log market cap). OOS = out-of-sample validation period (May–September 2025). IS = in-sample period (January 2019–April 2025). All main spread tests and factor alphas (except long–short) exceed conventional significance thresholds after multiple testing correction. Note on Buy vs. Hold: Both Buy and Hold portfolios generate significant alpha exceeding the Harvey t > 3.0 threshold. Hold consistently outperforms Buy (t = 4.43 vs. t = 3.81, Sharpe 1.40 vs. 1.28), suggesting that stocks with stable analyst sentiment represent particularly strong investment candidates—likely reflecting analyst consensus on quality firms.

Appendix A.2. Risk-Adjusted Performance Metrics

Table A2 presents the comprehensive risk metrics for signal-based portfolios.
Table A2. Risk-adjusted performance metrics.
Metric | Buy | Hold | Sell | Long–Short
Mean Return (Monthly) | 1.25% | 1.23% | 1.14% | 0.11%
Mean Return (Annual) | 15.0% | 14.7% | 13.7% | 1.3%
Std Dev (Monthly) | 2.41% | 2.14% | 2.73% | 1.38%
Sharpe Ratio (Ann.) | 1.28 | 1.40 | 0.99 | −0.64
Sortino Ratio | 1.75 | 2.04 | 1.35 | −0.94
Max Drawdown | −12.2% | −9.8% | −15.7% | −12.1%
N Observations | 1635 | 2860 | 1656 | 76 months
Notes: The risk-free rate is assumed to be 0% for the Sharpe calculation. The Sortino ratio uses downside deviation. The sample period is January 2019–April 2025 (76 months). Event-based three-event lookback with Q25/Q75 expanding global quantiles methodology.

Appendix A.3. Market Benchmark Comparison

Table A3 compares portfolio performance to market and factor benchmarks.
Table A3. Market benchmark comparison.
Panel A: Benchmark Performance (January 2019–April 2025, 76 months)
Benchmark | Mean Monthly | Std Dev | Sharpe Ratio | Cumulative
Market (MKT-RF) | 1.09% | 5.18% | 0.73 | 106.5%
Price Momentum (MOM) | 0.14% | 4.33% | 0.11 | 3.5%

Panel B: Portfolio vs. Market Comparison
Portfolio | Sharpe | vs. Market | Improvement
Buy | 1.28 | +0.55 | +75%
Hold | 1.40 | +0.67 | +92%
Sell | 0.99 | +0.26 | +36%
Long–Short | −0.64 | −1.37 | −188%
Notes: Sharpe ratios are annualized. MKT-RF = Market excess return. MOM = Fama–French momentum factor. Data from the Kenneth French Data Library. Buy and Hold portfolios substantially outperform both the market (75–92% higher Sharpe) and price momentum (analyst-sentiment Sharpe 1.28–1.40 vs. price momentum 0.11).

Appendix A.4. Fama–French Factor Regressions

Table A4 presents the factor regression results using the FF3, FF5, and FF6 models.
Table A4. Fama–French factor regressions.
Panel A: Alpha Estimates by Model and Signal
Model | Signal | α (Monthly) | α (Annual) | t-Stat | p-Value
FF3 | Buy | 1.18% | 14.2% | 4.10 | 0.0001
FF5 | Buy | 1.16% | 13.9% | 3.93 | 0.0002
FF6 | Buy | 1.13% | 13.6% | 3.81 | 0.0003
FF3 | Hold | 1.18% | 14.2% | 4.67 | <0.0001
FF5 | Hold | 1.17% | 14.0% | 4.53 | <0.0001
FF6 | Hold | 1.16% | 13.9% | 4.43 | <0.0001
FF3 | Sell | 1.07% | 12.8% | 3.28 | 0.0016
FF5 | Sell | 1.05% | 12.6% | 3.15 | 0.0024
FF6 | Sell | 1.04% | 12.5% | 3.09 | 0.0029
FF6 | Long–Short | −0.11% | −1.3% | −0.65 | 0.517

Panel B: FF6 Factor Loadings (Buy Portfolio)
Factor | β | t-Stat | Interpretation
Mkt-RF | −0.08 | −1.23 | Slight defensive tilt
SMB | 0.21 | 1.70 | Small positive size exposure
HML | −0.06 | −0.63 | Minimal value exposure
RMW | 0.12 | 0.83 | Slight quality tilt
CMA | −0.003 | −0.02 | Minimal investment factor exposure
MOM | 0.06 | 0.67 | Minimal momentum factor exposure
Notes: FF3 = Market, Size, Value. FF5 = FF3 + Profitability, Investment. FF6 = FF5 + Momentum. The sample period is January 2019–April 2025 (76 months). Factor data are from the Kenneth French Data Library. Bold indicates primary specification. The near-zero MOM loading ( β = 0.06, t = 0.67) confirms that analyst-sentiment momentum is orthogonal to price momentum.

Appendix A.5. Out-of-Sample Validation Details

Table A5 presents the month-by-month out-of-sample results.
Table A5. Out-of-sample validation: monthly results (May–September 2025).
Signal Month | Horizon | N Buy | N Sell | Buy Ret | Sell Ret | Spread
May 2025 | 1M | 16 | 24 | +3.96% | +3.37% | +0.59%
June 2025 | 1M | 20 | 19 | −0.73% | −2.24% | +1.51%
July 2025 | 1M | 27 | 44 | +4.01% | +4.92% | −0.91%
August 2025 | 1M | 19 | 12 | +2.79% | +0.28% | +2.51%
September 2025 | 1M | 24 | 18 | +1.46% | +0.01% | +1.45%
OOS Average | 1M | | | | | +1.03%
In-Sample | 1M | | | | | +0.96%
Retention Rate | | | | | | 107%
Notes: The out-of-sample period uses pre-registered methodology with all parameters frozen from in-sample development (through April 2025). Signals are generated using an event-based three-event lookback with Q25/Q75 expanding global quantiles. Four of the five months show positive spreads. The 107% retention rate indicates no performance degradation relative to the in-sample results, comparing favorably to the typical 30–50% post-publication decay documented by McLean and Pontiff (2016).

Appendix A.6. Multiple Testing Corrections

Given the multiple hypothesis tests conducted, we applied the Benjamini–Hochberg (BH) procedure (Benjamini & Hochberg, 1995) to control the false discovery rate (FDR) at 5%. Table A6 reports the raw p-values, BH-adjusted p-values, and comparisons with established thresholds.
Table A6. Multiple testing corrections: results vs. thresholds.
Test | t-Stat | Raw p | BH Threshold | BH-Adj. p | Significant?

Main Spread Results
3M Buy–Sell Spread | 3.66 | 0.0003 | 0.020 | 0.0006 | Yes ***
1M Buy–Sell Spread | 3.07 | 0.0022 | 0.030 | 0.0032 | Yes ***
2M Buy–Sell Spread | 3.07 | 0.0022 | 0.033 | 0.0032 | Yes ***

Factor-Adjusted Alphas (FF6)
Buy Portfolio | 3.81 | 0.0003 | 0.023 | 0.0006 | Yes ***
Hold Portfolio | 4.43 | <0.0001 | 0.010 | 0.0002 | Yes ***
Sell Portfolio | 3.09 | 0.0029 | 0.040 | 0.0037 | Yes ***
Long–Short | −0.65 | 0.517 | 0.043 | 0.597 | No

Academic Thresholds
Harvey et al. (2016) t > 3.0 | Main spreads (t = 3.07–3.66) and Buy/Hold alphas (t = 3.81–4.43) exceed threshold
Bonferroni (α/15) | Threshold = 0.0033; main 3M spread (p = 0.0003) significant
Notes: The Benjamini–Hochberg procedure was applied to 15 primary hypothesis tests (3 spread horizons + 12 factor alphas). BH threshold = rank × 0.05/15. 12 of 15 tests (80%) remain significant after FDR correction at 5%. Only the long–short portfolio alphas fail to reach significance, consistent with the near-zero spread after factor adjustment. Harvey et al. (2016) recommend t > 3.0 for new asset-pricing factors; our main results (t = 3.07–3.66) and individual portfolio alphas (t = 3.81–4.43) exceed this conservative threshold. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
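The Benjamini–Hochberg adjustment applied in Table A6 is a step-up procedure over the sorted p-values. A generic implementation (not tied to the paper's 15 tests) looks like this:

```python
import numpy as np

def bh_adjusted_pvalues(pvals):
    """Benjamini-Hochberg step-up adjusted p-values (FDR control)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)       # p_(i) * m / i
    # Enforce monotonicity: adjusted p_(i) = min over j >= i of scaled_(j)
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0.0, 1.0)               # restore input order
    return out
```

A test is declared significant at FDR level 0.05 when its adjusted p-value falls below 0.05, which is the criterion applied in the "Significant?" column.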

Appendix A.7. Transaction Cost Analysis

Table A7 presents gross and net returns accounting for realistic trading frictions.
Table A7. Transaction cost impact analysis.
Portfolio | Gross 3M Return | Net 3M Return | Cost Impact | Break-Even Cost
Buy | 4.68% | 4.39% | −0.29% | N/A
Hold | 3.44% | 3.15% | −0.29% | N/A
Sell | 2.74% | 2.45% | −0.29% | N/A
Long–Short | 1.94% | 1.30% | −0.64% | ∼130 bps
Notes: Cost assumptions: 20 bps round-trip for long positions and 40 bps for short positions (including borrowing costs). Monthly rebalancing with an estimated 30% turnover. Net returns remain economically significant: the long–short strategy is profitable at costs up to ∼130 bps per round-trip. A buy-only implementation faces lower friction given single-sided execution.

Appendix A.8. Control Variable Regressions

To address concerns about confounding firm characteristics, we estimated regressions controlling for book-to-market ratio and firm size. Table A8 reports the results from pooled OLS regressions of 3-month forward returns on signal indicators with progressively added controls.
Table A8. Control variable regressions: signal effects with firm characteristics.
Variable | (1) Baseline Coef. | t-Stat | (2) + B/M Coef. | t-Stat | (3) + B/M + Size Coef. | t-Stat
Constant | 0.0345 *** | 12.61 | 0.0382 *** | 11.44 | 0.0123 | 0.24
Signal_Buy | 0.0123 *** | 2.64 | 0.0126 *** | 2.69 | 0.0126 *** | 2.71
Signal_Sell | −0.0070 | −1.52 | −0.0067 | −1.44 | −0.0066 | −1.43
B/M Ratio | | | −0.0112 * | −1.93 | −0.0102 * | −1.68
Log (Market Cap) | | | | | 0.0010 | 0.51
Buy–Sell Spread | 1.94% | | 1.92% | | 1.93% |
N | 6120 | | 6120 | | 6120 |
R² | 0.002 | | 0.003 | | 0.003 |
Notes: The dependent variable is the 3-month forward return. Signal_Buy and Signal_Sell are indicator variables (Hold is the omitted category). The B/M ratio is calculated as the inverse of price-to-book and winsorized at 1%/99%. Robust (HC1) standard errors are reported. The sample is restricted to observations with available book value data. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively. Key finding: The Buy signal coefficient remains statistically significant (p = 0.007) after controlling for book-to-market and size, confirming that the momentum-based signal provides incremental predictive power beyond standard firm characteristics.

References

  1. Ali, U., & Hirshleifer, D. (2020). Shared analyst coverage: Unifying momentum spillover effects. Journal of Financial Economics, 136(3), 649–675. [Google Scholar] [CrossRef]
  2. Asquith, P., Mikhail, M. B., & Au, A. S. (2005). Information content of equity analyst reports. Journal of Financial Economics, 75(2), 245–282. [Google Scholar] [CrossRef]
  3. Barber, B. M., Lehavy, R., McNichols, M., & Trueman, B. (2001). Can investors profit from the prophets? Security analyst recommendations and stock returns. The Journal of Finance, 56(2), 531–563. [Google Scholar] [CrossRef]
  4. Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300. [Google Scholar] [CrossRef]
  5. Brav, A., & Lehavy, R. (2003). An empirical analysis of analysts’ target prices: Short-term informativeness and long-term dynamics. The Journal of Finance, 58(5), 1933–1967. [Google Scholar] [CrossRef]
  6. Conrad, J. S., Cornell, B., Landsman, W. R., & Rountree, B. (2006). How do analyst recommendations respond to major news? Journal of Financial and Quantitative Analysis, 41(1), 25–49. [Google Scholar] [CrossRef]
  7. Green, T. C. (2006). The value of client access to analyst recommendations. Journal of Financial and Quantitative Analysis, 41(1), 1–24. [Google Scholar] [CrossRef]
  8. Gu, S., Kelly, B., & Xiu, D. (2020). Empirical asset pricing via machine learning. The Review of Financial Studies, 33(5), 2223–2273. [Google Scholar] [CrossRef]
  9. Harvey, C. R., Liu, Y., & Zhu, H. (2016). …and the cross-section of expected returns. Review of Financial Studies, 29(1), 5–68. [Google Scholar] [CrossRef]
  10. Hong, H., Lim, T., & Stein, J. C. (2000). Bad news travels slowly: Size, analyst coverage, and the profitability of momentum strategies. The Journal of Finance, 55(1), 265–295. [Google Scholar] [CrossRef]
  11. Huang, A. H., Wang, H., & Yang, Y. (2023). FinBERT: A large language model for extracting information from financial text. Contemporary Accounting Research, 40(2), 806–841. [Google Scholar] [CrossRef]
  12. Jegadeesh, N., Kim, J., Krische, S. D., & Lee, C. M. (2004). Analyzing the analysts: When do recommendations add value? The Journal of Finance, 59(3), 1083–1124. [Google Scholar] [CrossRef]
  13. Jegadeesh, N., & Kim, W. (2006). Value of analyst recommendations: International evidence. Journal of Financial Markets, 9(3), 274–309. [Google Scholar] [CrossRef]
  14. Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. The Journal of Finance, 48(1), 65–91. [Google Scholar] [CrossRef]
  15. Ke, Z. T., Kelly, B., & Xiu, D. (2019). Predicting returns with text data. NBER working paper no. w26186. Available online: https://ssrn.com/abstract=3446492 (accessed on 28 November 2025).
  16. Liddell, T. M., & Kruschke, J. K. (2018). Analyzing ordinal data with metric models: What could possibly go wrong? Journal of Experimental Social Psychology, 79, 328–348. [Google Scholar] [CrossRef]
  17. Lockwood, J., Lockwood, L., Miao, H., Uddin, M. R., & Li, K. (2023). Does analyst optimism fuel stock price momentum? Journal of Behavioral Finance, 24(4), 411–427. [Google Scholar] [CrossRef]
  18. Loh, R. K., & Stulz, R. M. (2011). When are analyst recommendation changes influential? The Review of Financial Studies, 24(2), 593–627. [Google Scholar] [CrossRef]
  19. Long, H., Zhu, R., Wang, C., Yao, Z., & Zaremba, A. (2025). The gap between you and your peers matters: The net peer momentum effect in China. Modern Finance, 3(3), 40–53. [Google Scholar] [CrossRef]
  20. Lopez-Lira, A., & Tang, Y. (2023). Can ChatGPT forecast stock price movements? Return predictability and large language models. Available online: https://ssrn.com/abstract=4412788 (accessed on 28 November 2025).
  21. Malmendier, U., & Shanthikumar, D. (2007). Are small investors naive about incentives? Journal of Financial Economics, 85(2), 457–489. [Google Scholar] [CrossRef]
  22. McCarthy, S., & Alaghband, G. (2024). Fin-ALICE: Artificial linguistic intelligence causal econometrics. Journal of Risk and Financial Management, 17(12), 537. [Google Scholar] [CrossRef]
  23. McLean, R. D., & Pontiff, J. (2016). Does academic research destroy stock return predictability? The Journal of Finance, 71(1), 5–32. [Google Scholar] [CrossRef]
  24. Womack, K. L. (1996). Do brokerage analysts’ recommendations have investment value? The Journal of Finance, 51(1), 137–167. [Google Scholar] [CrossRef]
Figure 1. Momentum normalization framework process flow. This diagram illustrates the complete momentum normalization pipeline from raw analyst grades to investment signals. The framework transforms diverse rating events through five integrated stages using an event-based lookback (comparing to the 3rd previous rating event) and expanding global quantile classification. The process maintains strict temporal integrity through past-only ECDF normalization and produces balanced signal distributions (27% Buy, 46% Hold, 27% Sell) through Q25/Q75 quantile thresholds calculated from all prior months.
Figure 3. Quarterly firm ranking stability across four sectors. Dark green cells indicate rank #1, orange indicates rank #5, and white indicates that the firm is not in the top 5 for that quarter. Consistent dark shading indicates stable top performance. The analysis covers quarterly periods from Q1 2019 through Q2 2025.
Figure 4. Cross-sector firm expertise matrix. The left heatmap shows Buy–Sell spreads (red = negative, yellow = moderate, green = high); since these are top-5 performers by design, all spreads are positive and values fall in the green range. The right heatmap shows within-sector rankings (darker green = better rank). Rows are sorted by the number of sectors in which the firm appears in the top 5.
Table 1. Analyst rating taxonomy—text to ordinal mapping.
| Ordinal Value | Rating Category | Specific Terms | Economic Interpretation |
|---|---|---|---|
| 5 | Strong Buy | Strong Buy, Top Pick, Conviction Buy | Highest-conviction positive ratings designating a firm’s best investment ideas |
| 4 | Buy | Buy, Overweight, Outperform, Add, Accumulate, Long Term Buy, Market Outperform, Sector Outperform, Positive, Above Average | Standard positive ratings implying expected returns exceeding relevant benchmarks |
| 3 | Hold | Hold, Neutral, Equal Weight, Market Perform, Sector Perform, Peer Perform, In Line, Perform, Mixed, Average, Market Weight, Sector Weight | Neutral ratings implying expected returns consistent with relevant benchmarks |
| 2 | Sell | Sell, Underweight, Underperform, Reduce, Sector Underperform, Market Underperform, Negative | Standard negative ratings implying expected returns below relevant benchmarks |
| 1 | Strong Sell | Strong Sell | Highest-conviction negative ratings |
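The text-to-ordinal mapping in Table 1 can be sketched as a simple lookup. The term lists below follow the taxonomy in the table; `normalize_grade` is an assumed helper name for illustration, not from the paper’s code.

```python
# Hypothetical sketch of the Table 1 text-to-ordinal mapping.
RATING_MAP = {
    "strong buy": 5, "top pick": 5, "conviction buy": 5,
    "buy": 4, "overweight": 4, "outperform": 4, "add": 4, "accumulate": 4,
    "long term buy": 4, "market outperform": 4, "sector outperform": 4,
    "positive": 4, "above average": 4,
    "hold": 3, "neutral": 3, "equal weight": 3, "market perform": 3,
    "sector perform": 3, "peer perform": 3, "in line": 3, "perform": 3,
    "mixed": 3, "average": 3, "market weight": 3, "sector weight": 3,
    "sell": 2, "underweight": 2, "underperform": 2, "reduce": 2,
    "sector underperform": 2, "market underperform": 2, "negative": 2,
    "strong sell": 1,
}

def normalize_grade(term: str) -> int:
    """Map a raw analyst grade string to its 1-5 ordinal value."""
    return RATING_MAP[term.strip().lower()]

print(normalize_grade("Overweight"))    # 4
print(normalize_grade("Equal Weight"))  # 3
```

Case-folding and whitespace stripping handle minor formatting variation across brokerages; unmapped terms raise `KeyError` and would need manual review.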
Table 2. Illustrative workflow: transforming a raw analyst grade into a momentum signal.
| Stage | Input Data and Context | Mathematical Operation | Result |
|---|---|---|---|
| 1. Event Lookback | Current event ($k$): Goldman rates AAPL “Neutral” (3). Reference event ($k-3$): Goldman rated AAPL “Buy” (4). Context: a downgrade relative to the 3rd prior event. | $\Delta R = R_k - R_{k-3} = 3 - 4$ | $-1$ |
| 2. Firm History | Goldman’s prior changes ($N = 13$): $\{-2, -2, -1, -1, -1, 0, 0, 0, 0, +1, +1, +1, +2\}$. Context: how rare is a $-1$ change for this specific firm? | $\mathrm{Rank} = N_{<} + 0.5\,N_{=} = 2 + (0.5 \times 3)$ | $3.5$ |
| 3. Normalization | Rank position: 3.5. Total history ($N$): 13. Context: convert the rank to a percentile score (ECDF). | $M_{ijm} = \mathrm{Rank}/N = 3.5/13$ | $\approx 0.269$ |
| 4. Aggregation | Active firms for AAPL (month $m$): Goldman Sachs ($M = 0.269$), Morgan Stanley ($M = 0.750$), JPMorgan ($M = 0.420$). | $\bar{M}_{im} = \frac{1}{J}\sum_{j=1}^{J} M_{ijm} = 1.439/3$ | $\approx 0.480$ |
| 5. Classification | Stock score: 0.480. Global quantiles (expanding window): $Q_{25} = 0.35$ (Sell threshold), $Q_{75} = 0.65$ (Buy threshold). | $0.35 < 0.480 < 0.65$ | HOLD |
Note: This example illustrates the processing of a Goldman Sachs downgrade of AAPL from “Buy” to “Neutral.” (1) Event Lookback captures the magnitude of the sentiment shift ($\Delta = -1$). (2) Firm History ranks this shift within Goldman’s own record of rating changes. (3) Normalization converts the rank into a bearish momentum score (0.269). (4) Aggregation moderates this individual bearish view with bullish/neutral scores from other firms (Morgan Stanley, JPMorgan). (5) The final Classification places the aggregated score (0.480) within the interquartile range of the global distribution (0.35–0.65), resulting in a stable “Hold” signal rather than a knee-jerk “Sell.”
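The five-stage workflow above can be sketched in a few lines. The numbers mirror the Goldman Sachs/AAPL illustration in Table 2; the function names and the boundary convention at exactly Q25/Q75 are our assumptions, not the paper’s code.

```python
# Minimal sketch of the Table 2 workflow (illustrative names and data).

def momentum_score(delta: float, history: list) -> float:
    """Past-only mid-rank ECDF: (count below + 0.5 * count equal) / N."""
    below = sum(1 for h in history if h < delta)
    equal = sum(1 for h in history if h == delta)
    return (below + 0.5 * equal) / len(history)

def classify(score: float, q25: float = 0.35, q75: float = 0.65) -> str:
    """Expanding global quantile thresholds (Q25/Q75) -> Buy/Hold/Sell."""
    if score >= q75:
        return "BUY"
    if score <= q25:
        return "SELL"
    return "HOLD"

# Stage 1: event-based lookback (current vs. 3rd previous rating event)
delta = 3 - 4                                # "Neutral"(3) vs. "Buy"(4) -> -1

# Stages 2-3: rank the change within Goldman's own history of changes
history = [-2, -2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 2]
m_goldman = momentum_score(delta, history)   # (2 + 0.5*3)/13 ~= 0.269

# Stage 4: average across active firms for AAPL this month
scores = [m_goldman, 0.750, 0.420]           # + Morgan Stanley, JPMorgan
m_bar = sum(scores) / len(scores)            # ~= 0.480

# Stage 5: classify against global quantile thresholds
print(round(m_goldman, 3), round(m_bar, 3), classify(m_bar))  # 0.269 0.48 HOLD
```

The mid-rank treatment of ties (`0.5 * count equal`) is what produces the 3.5 rank in Stage 2 rather than 2 or 5.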
Table 3. Dataset summary statistics.
| Dataset | Statistic | Count |
|---|---|---|
| Grades Dataset | Total Grade Events | 68,660 |
| | Post-2019 Grade Events | 42,015 |
| | Unique Stocks | 109 |
| | Unique Analyst Firms | 270 |
| | Unique Grade Terms | 31 |
| Consensus Dataset | Total Observations | 7844 |
| | Post-2019 Observations | 7843 |
| | Unique Stocks | 110 |
| Price Dataset | Total Monthly Prices | 17,308 |
| | Unique Stocks | 106 |
| Aligned Universe | Common Stocks (All Datasets) | 106 |
| | Momentum Model Coverage | 6151 |
Table 4. Forward portfolio returns by model classification (January 2019–April 2025).
| Model | Signal | 1M Return | 2M Return | 3M Return | Observations |
|---|---|---|---|---|---|
| Plurality Vote | Buy | 1.2% | 2.5% | 3.7% | 5929 |
| | Hold | 0.8% | 1.7% | 2.5% | 1645 |
| | Sell | 8.5% | 14.9% | 20.2% | 57 |
| | Buy–Sell Spread | −7.3% | −12.4% | −16.5% | |
| | t-statistic | −1.78 | −1.61 | −1.60 | |
| | p-value | 0.081 | 0.112 | 0.115 | |
| Buy-Ratio | Buy | 1.2% | 2.5% | 3.7% | 4145 |
| | Hold | 1.2% | 2.5% | 3.6% | 1956 |
| | Sell | 1.0% | 2.0% | 3.1% | 1530 |
| | Buy–Sell Spread | +0.2% | +0.5% | +0.6% | |
| | t-statistic | 0.64 | 1.14 | 1.01 | |
| | p-value | 0.521 | 0.256 | 0.314 | |
| Momentum-Based | Buy | 1.84% | 3.11% | 4.68% | 1635 |
| | Hold | 1.14% | 2.34% | 3.44% | 2860 |
| | Sell | 0.87% | 1.74% | 2.74% | 1656 |
| | Buy–Sell Spread | +0.96% *** | +1.36% *** | +1.94% *** | |
| | t-statistic | 3.07 | 3.07 | 3.66 | |
| | p-value | 0.002 | 0.002 | <0.001 | |

Notes: The sample period is January 2019 to April 2025. Forward returns are calculated as $(P_{t+h}/P_t) - 1$, where signals generated in month $t$ are evaluated against price changes over the subsequent $h$ months. The Plurality Vote and Buy-Ratio methods use monthly consensus recommendation data (7633 stock-months); the momentum-based method uses individual grade change data with an event-based 3-event lookback (6151 stock-months). Momentum signals are classified using expanding global quantiles (Q25/Q75) applied to past-only ECDF-normalized momentum scores. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
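The forward-return definition and spread test in the Table 4 notes can be sketched as follows. The synthetic return draws and the use of Welch’s t-statistic are our assumptions; the paper does not specify the exact test variant here.

```python
# Sketch of the Table 4 evaluation on synthetic data (illustrative only).
import numpy as np

def forward_return(prices: np.ndarray, t: int, h: int) -> float:
    """(P_{t+h} / P_t) - 1, as defined in the Table 4 notes."""
    return prices[t + h] / prices[t] - 1.0

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t-statistic for the Buy-Sell mean spread (a minus b)."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

rng = np.random.default_rng(0)
buy = rng.normal(0.0184, 0.08, 1635)   # 1M Buy mean/N from Table 4, assumed vol
sell = rng.normal(0.0087, 0.08, 1656)  # 1M Sell mean/N from Table 4, assumed vol
print(f"spread={buy.mean() - sell.mean():.2%}, t={welch_t(buy, sell):.2f}")
```

With sample sizes in the thousands, even a sub-1% monthly spread can clear conventional significance thresholds, which is consistent with the momentum-based row of the table.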
Table 6. Lookback period sensitivity analysis.
Panel A: Calendar-Based Lookback (compare to rating N months ago, Q20/Q80 thresholds)

| Lookback | 1M Spread | t-Stat | 2M Spread | t-Stat | 3M Spread | t-Stat |
|---|---|---|---|---|---|---|
| 3M-calendar | +0.41% | 1.31 | +0.58% | 1.29 | +1.48% ** | 2.68 |
| 4M-calendar | +0.63% * | 2.12 | +0.78% | 1.84 | +0.96% | 1.86 |
| 5M-calendar | +0.39% | 1.43 | +0.42% | 1.10 | +0.94% * | 1.99 |
| 6M-calendar | +0.56% * | 2.14 | +0.37% | 1.00 | +0.57% | 1.26 |

Panel B: Event-Based Lookback (compare to Nth previous rating event, Q25/Q75 thresholds)

| Lookback | 1M Spread | t-Stat | 2M Spread | t-Stat | 3M Spread | t-Stat |
|---|---|---|---|---|---|---|
| 2-event | +0.89% ** | 2.58 | +0.78% | 1.60 | +1.22% * | 2.07 |
| 3-event (selected) | +0.96% *** | 3.07 | +1.36% *** | 3.07 | +1.94% *** | 3.66 |
| 4-event | +0.67% * | 2.08 | +1.21% ** | 2.67 | +1.52% ** | 2.77 |
| 5-event | +0.82% ** | 2.57 | +1.19% ** | 2.65 | +1.53% ** | 2.84 |

Notes: Calendar-based lookback compares the current rating to the rating N months prior, whereas event-based lookback compares the current rating to the Nth previous rating event regardless of elapsed time. Calendar-based lookback uses Q20/Q80 thresholds (optimal for this method), and event-based lookback uses Q25/Q75 thresholds (optimal for this method). All tests employ expanding global quantiles with past-only ECDF normalization. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
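The distinction between the two lookback schemes compared above can be illustrated on a toy rating series. The events, function names, and the 30-day approximation of calendar months are all our assumptions for this sketch.

```python
# Sketch contrasting event-based (Panel B) and calendar-based (Panel A)
# lookbacks on one hypothetical firm-stock series of (date, rating) pairs.
from datetime import date

events = [(date(2024, 1, 5), 4), (date(2024, 2, 20), 4),
          (date(2024, 6, 1), 3), (date(2024, 9, 15), 3),
          (date(2025, 1, 10), 2)]

def event_lookback_change(events, k, n=3):
    """Compare event k to the nth previous rating event, regardless of time."""
    if k - n < 0:
        return None
    return events[k][1] - events[k - n][1]

def calendar_lookback_change(events, k, months=3):
    """Compare event k to the latest event at least ~N months earlier.
    Month arithmetic is approximated as 30-day blocks in this sketch."""
    cutoff = events[k][0].toordinal() - 30 * months
    prior = [r for d, r in events[:k] if d.toordinal() <= cutoff]
    return events[k][1] - prior[-1] if prior else None

print(event_lookback_change(events, 4))     # -2 (vs. Feb 2024 rating of 4)
print(calendar_lookback_change(events, 4))  # -1 (vs. Sep 2024 rating of 3)
```

The same final downgrade yields different change magnitudes under the two schemes because infrequent raters can have their 3rd previous event lie far outside a 3-month calendar window.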
Table 7. Quantile threshold research.
Panel A: Lookback Type Comparison (best threshold for each method)

| Lookback Type | Best Threshold | 3M Spread | t-Stat | p-Value |
|---|---|---|---|---|
| Event-based (3-event) | Q25/Q75 expanding | 1.94% | 3.66 | 0.0003 |
| Calendar-based (3-month) | Q20/Q80 expanding | 1.48% | 2.68 | 0.0074 |

Panel B: Threshold Configurations (Event-Based 3-Event Lookback)

| Threshold | Method | 3M Spread | t-Stat | p-Value | Significant? |
|---|---|---|---|---|---|
| Q25/Q75 | Monthly cross-sectional | 0.44% | 0.81 | 0.417 | No |
| Q20/Q80 | Monthly cross-sectional | 0.34% | 0.56 | 0.575 | No |
| Q15/Q85 | Monthly cross-sectional | 0.09% | 0.13 | 0.899 | No |
| **Q25/Q75** | **Expanding global** | **1.94%** | **3.66** | **0.0003** | **Yes \*\*\*** |
| Q20/Q80 | Expanding global | 1.70% | 2.76 | 0.006 | Yes ** |
| Q33/Q67 | Expanding global | 1.43% | 3.26 | 0.001 | Yes ** |
| Q25/Q75 | Rolling (12 m window) | 1.82% | 3.27 | 0.001 | Yes ** |

Notes: All configurations use past-only ECDF normalization (no look-ahead bias). Expanding global calculates quantile thresholds using all historical observations prior to each month. Rolling uses a 12-month lookback window. Monthly cross-sectional calculates thresholds from the current month’s cross-section only. Bold indicates the selected methodology. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
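The expanding-global scheme selected above can be sketched as follows: each month is classified against quantile thresholds computed only from earlier months, so no look-ahead enters. The data and helper names are illustrative, not from the paper’s code.

```python
# Sketch of expanding global Q25/Q75 thresholds (the selected scheme).
import numpy as np

def expanding_thresholds(past_scores, lo=0.25, hi=0.75):
    """Q25/Q75 from ALL observations strictly before the current month."""
    return np.quantile(past_scores, lo), np.quantile(past_scores, hi)

# monthly_scores[m] = stock-level momentum scores observed in month m
monthly_scores = [np.array([0.2, 0.4, 0.6]),
                  np.array([0.1, 0.5, 0.9]),
                  np.array([0.3, 0.48, 0.7])]

# Classify month 2 using only months 0-1 (past-only, no look-ahead)
past = np.concatenate(monthly_scores[:2])
q25, q75 = expanding_thresholds(past)
signals = np.where(monthly_scores[2] >= q75, "BUY",
                   np.where(monthly_scores[2] <= q25, "SELL", "HOLD"))
print(q25, q75, signals)
```

A monthly cross-sectional variant would instead compute the quantiles from `monthly_scores[2]` itself, which forces a fixed fraction of Buy/Sell signals every month regardless of how bullish or bearish the month is in absolute terms.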
Table 8. Top-5 analyst firms by sector (3-month Buy–Sell spread).
| Sector | Top 5 Firms (Ranked) | Best Spread | Agg. Spread | Coverage |
|---|---|---|---|---|
| Consumer Discretionary | RBC Capital Markets (#1), TD Cowen (#2), Wedbush (#3), UBS (#4), Piper Sandler (#5) | 13.7% | 9.6% | 80% |
| Consumer Staples | Telsey Advisory Group (#1), Oppenheimer (#2), Morgan Stanley (#3), Bank of America (#4), JPMorgan (#5) | 6.8% | 3.1% | 100% |
| Energy | Scotiabank (#1), Deutsche Bank (#2), Citigroup (#3), RBC Capital Markets (#4), Stifel (#5) | 8.5% | 6.1% | 100% |
| Financials | Raymond James (#1), Oppenheimer (#2), Citigroup (#3), BMO Capital Markets (#4), Baird (#5) | 11.1% | 5.7% | 80% |
| Gold | Raymond James (#1), RBC Capital Markets (#2), Deutsche Bank (#3), Scotiabank (#4), Citigroup (#5) | 13.8% | 8.3% | 80% |
| Healthcare | Wells Fargo (#1), Raymond James (#2), Needham (#3), Goldman Sachs (#4), TD Cowen (#5) | 5.4% | 4.3% | 90% |
| Industrials | Guggenheim (#1), Credit Suisse (#2), Sanford C. Bernstein (#3), Mizuho (#4), KeyBanc (#5) | 7.5% | 5.1% | 100% |
| Materials | Instinet (#1), Jefferies (#2), Mizuho (#3), Barclays (#4), Bank of America (#5) | 10.8% | 3.6% | 90% |
| Real Estate | KeyBanc Capital Markets (#1), Raymond James (#2), Deutsche Bank (#3), RBC Capital Markets (#4), TD Cowen (#5) | 6.8% | 4.4% | 100% |
| Technology | RBC Capital Markets (#1), Mizuho (#2), Piper Sandler (#3), Raymond James (#4), Deutsche Bank (#5) | 14.7% | 10.6% | 70% |
| Telecommunications | RBC Capital Markets (#1), Morgan Stanley (#2), Oppenheimer (#3), Instinet (#4), UBS (#5) | 9.8% | 6.0% | 90% |
| Utilities | Citigroup (#1), Goldman Sachs (#2), Guggenheim (#3), KeyBanc (#4), Scotiabank (#5) | 4.1% | 3.3% | 100% |

Notes: Best Spread = the highest individual firm spread within each sector (from the #1 ranked firm); Agg. Spread = the combined Buy–Sell spread when pooling all signals from the sector’s top-5 firms (matches Figure 2); Coverage = the percentage of the top 10 stocks (by market capitalization) in each sector covered by the aggregated top-5 firms. Spread values represent 3-month forward returns (Buy signals minus Sell signals). Rankings are determined using firm-isolated momentum methodology with a minimum of 5 observations per signal category for most sectors (3 observations for the Gold sector due to lower coverage).
Table 9. Performance comparison—sector-specific top-5 vs. baseline methods.
| Methodology | Mean Spread (Across Sectors) | N (Obs) |
|---|---|---|
| Momentum: Sector-Specific Top-5 Firms | 5.84% | 5816 |
| Momentum: Full Framework | 1.94% | 6151 |
| Majority Vote | −1.37% | 11,741 |
| Buy-Ratio | −0.10% | 11,741 |

Improvement Analysis (Sector-by-Sector Comparison):

| Comparison | Mean Spread Improvement |
|---|---|
| Top-5 vs. Majority Vote | +7.21 percentage points |
| Top-5 vs. Buy-Ratio | +5.94 percentage points |

Notes: The top-5 spread represents the simple average across 12 sectors (5.84%), as reported in the cumulative sector summary. The weighted average, weighted by the number of observations, is 6.03%. Full momentum framework statistics are taken from the main paper, Table 4 (Section 5). Baseline method statistics are calculated from the full dataset (all analyst firms, all covered stocks). The sector-specific top-5 approach processes 5816 observations (95% of the full momentum sample of 6151) due to the requirement that signals come exclusively from each sector’s top-5 ranked firms.
Table 10. Risk-adjusted performance by signal.
| Metric | Buy | Hold | Sell | Long–Short |
|---|---|---|---|---|
| Monthly Return | 1.25% | 1.23% | 1.14% | 0.11% |
| Annualized Return | 15.0% | 14.7% | 13.7% | 1.3% |
| Annualized Volatility | 8.3% | 7.4% | 9.4% | 4.8% |
| Sharpe Ratio | 1.28 | 1.40 | 0.99 | −0.64 |
| Sortino Ratio | 1.75 | 2.04 | 1.35 | −0.94 |
| Max Drawdown | −12.2% | −9.8% | −15.7% | −12.1% |

Notes: The risk metrics are calculated over 76 months (January 2019–April 2025). The Sharpe and Sortino ratios use annualized returns and volatility. Bold indicates the best performance.
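The Table 10 metrics can be computed from a monthly return series as sketched below. The √12 annualization convention and the handling of the risk-free rate are our assumptions; the table’s notes do not state them explicitly.

```python
# Sketch of the Table 10 risk metrics from monthly portfolio returns.
import numpy as np

def risk_metrics(monthly: np.ndarray, rf_monthly: float = 0.0) -> dict:
    """Annualized return/vol, Sharpe, Sortino, and max drawdown."""
    excess = monthly - rf_monthly
    ann_ret = monthly.mean() * 12
    ann_vol = monthly.std(ddof=1) * np.sqrt(12)
    downside = excess[excess < 0]
    down_vol = downside.std(ddof=1) * np.sqrt(12) if len(downside) > 1 else np.nan
    wealth = np.cumprod(1 + monthly)               # cumulative wealth path
    drawdown = wealth / np.maximum.accumulate(wealth) - 1
    return {"ann_return": ann_ret,
            "ann_vol": ann_vol,
            "sharpe": excess.mean() * 12 / ann_vol,
            "sortino": excess.mean() * 12 / down_vol,
            "max_drawdown": drawdown.min()}

# Synthetic 76-month series matching the Buy column's scale
# (monthly mean 1.25%, annualized vol ~8.3% -> monthly std ~2.4%)
rng = np.random.default_rng(1)
m = risk_metrics(rng.normal(0.0125, 0.024, 76))
print({k: round(float(v), 2) for k, v in m.items()})
```

Sortino differs from Sharpe only in the denominator (downside deviation instead of total volatility), which is why the Hold portfolio’s low downside volatility lifts its Sortino ratio above its Sharpe ratio by a wider margin than the other signals.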
Table 11. Fama–French six-factor loadings by signal.
| Factor | Buy ($\beta$) | Sell ($\beta$) | Same Direction? | L/S ($\beta$) |
|---|---|---|---|---|
| Alpha | 1.13% *** | 1.04% *** | Both positive | −0.11% |
| Mkt-RF | −0.079 | −0.093 | Both negative | +0.014 |
| SMB | 0.215 | 0.154 | Both positive | +0.079 |
| HML | −0.063 | −0.109 | Both negative | +0.033 |
| RMW | 0.116 | 0.082 | Both positive | +0.049 |
| CMA | −0.003 | −0.002 | Both ≈ zero | +0.024 |
| MOM | 0.058 | 0.021 | Both positive | +0.038 |

Notes: FF6 regression results over 76 months. Alpha is monthly. L/S = Long Buy, Short Sell. Significance: *** p < 0.001. Factor return data are from the Kenneth French Data Library.
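The FF6 alphas in Table 11 come from regressing portfolio excess returns on the six factors. The sketch below uses ordinary least squares on synthetic factor data; in practice the factor series would be downloaded from the Kenneth French Data Library, and the helper name `ff6_alpha` is ours.

```python
# Sketch of the FF6 alpha regression with synthetic factors (illustrative).
import numpy as np

def ff6_alpha(excess_ret: np.ndarray, factors: np.ndarray):
    """OLS of excess returns on six factors; returns (alpha, betas)."""
    X = np.column_stack([np.ones(len(excess_ret)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0], coef[1:]

rng = np.random.default_rng(2)
T = 76                                    # months, Jan 2019 - Apr 2025
factors = rng.normal(0, 0.03, (T, 6))     # Mkt-RF, SMB, HML, RMW, CMA, MOM
true_beta = np.array([-0.08, 0.22, -0.06, 0.12, 0.0, 0.06])
ret = 0.0113 + factors @ true_beta + rng.normal(0, 0.005, T)

alpha, betas = ff6_alpha(ret, factors)
print(f"alpha={alpha:.4f} (monthly)")     # should be near the assumed 1.13%
```

A monthly alpha of 1.13% compounds to roughly the 13.6% annualized figure cited in the abstract, which is the sense in which the Buy signal’s return survives the six factor controls.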
Table 12. Sector implementation viability.
| Sector | Top-Firm Stability | Coverage | 3M Spread | Recommendation |
|---|---|---|---|---|
| Energy | 100% | 100% | 6.1% | Sector-specific |
| Consumer Staples | 100% | 100% | 3.1% | Sector-specific |
| Industrials | 92% | 100% | 5.1% | Sector-specific |
| Real Estate | 92% | 100% | 4.4% | Sector-specific |
| Utilities | 92% | 100% | 3.3% | Sector-specific |
| Telecom | 100% | 90% | 6.0% | Baseline (incomplete coverage) |
| Materials | 96% | 90% | 3.6% | Baseline (incomplete coverage) |
| Health Care | 65% | 90% | 4.3% | Baseline (incomplete coverage) |
| Financials | 100% | 80% | 5.7% | Baseline (incomplete coverage) |
| Gold | 100% | 80% | 8.3% | Baseline (incomplete coverage) |
| Consumer Disc. | 85% | 80% | 9.6% | Baseline (incomplete coverage) |
| Technology | 77% | 70% | 10.6% | Baseline (incomplete coverage) |

Notes: Top-Firm Stability = the percentage of quarters (out of 26) where the sector’s most stable analyst firm appeared in the top-5 rankings. Coverage = the percentage of top-10 market capitalization stocks covered by the sector’s top-5 firms. 3M Spread = the 3-month Buy–Sell spread using the sector-specific approach. The sector-specific recommendation requires both ≥90% stability and 100% coverage.

Share and Cite

MDPI and ACS Style

McCarthy, S.; Alaghband, G. A Momentum-Based Normalization Framework for Generating Profitable Analyst Sentiment Signals. Int. J. Financial Stud. 2026, 14, 4. https://doi.org/10.3390/ijfs14010004