Scientific Impact Prediction via Virtual Geography Hawkes Process
Abstract
1. Introduction
1.1. Background
1.2. Research Aim and Objectives
1.3. Contributions
- C1: Virtual-geography Hawkes modeling. We propose a Hawkes-process formulation in which excitation is modulated by a virtual-geography structure, offering an interpretable latent organization of interaction strength beyond fixed graph kernels or purely feature-driven covariates (an illustrative form of such an intensity is sketched after this list).
- C2: Unified modeling of heterogeneous interactions. We provide a unified event-driven framework that can incorporate multiple academic interaction channels (e.g., citations and collaboration-related signals) through a shared intensity construction, instead of handling each relation with a separate ad hoc model.
- C3: Learning procedure and feasibility demonstration. We present a practical learning pipeline for the proposed model and provide an initial empirical validation demonstrating feasibility and the added value of the virtual-geography mechanism within the scope of this Communication.
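To make C1 concrete ahead of the formal development in Section 3, the block below gives a minimal illustrative form such an intensity can take, assuming an exponential time kernel and a Gaussian proximity kernel over latent coordinates; the notation is ours, not the paper's.

```latex
% Illustrative VG-Hawkes intensity (assumed form; the paper's actual
% construction is in Sections 3.3-3.4). Each node i carries latent
% virtual-geography coordinates z_i learned jointly with the process.
\lambda_i(t) = \mu_i + \sum_{j \,:\, t_j < t}
    \alpha \, \kappa(z_i, z_j) \, \beta e^{-\beta (t - t_j)},
\qquad
\kappa(z_i, z_j) = \exp\!\left( -\frac{\lVert z_i - z_j \rVert^2}{2\sigma^2} \right)
```

Here \(\mu_i\) is a node-specific background rate, \(\alpha\) and \(\beta\) set the excitation weight and temporal decay, and \(\kappa\) routes excitation through learned proximity in the virtual geography rather than through a fixed observed graph.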
2. Related Work
2.1. Citation-Impact and Scientometric Models
2.2. Point-Process and Hawkes-Process Modeling of Citation Dynamics
2.3. Network-Aware Extensions and Representation-Based Approaches
2.4. Positioning of VG-Hawkes
3. Method
3.1. Concept
3.2. Overview of the Proposed Method
3.3. Hawkes Process
3.4. Virtual Network Hawkes Process
3.5. Learning Algorithm
Algorithm 1. EM-based Parameter Estimation for the Virtual Network Hawkes Process.
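Algorithm 1's pseudocode is not reproduced in this outline. For orientation, the sketch below implements the standard EM/latent-attribution scheme for a plain univariate Hawkes process with an exponential kernel, which the paper's estimator extends with the virtual-geography terms; the function name, parameterization, and toy data are ours, and boundary effects are ignored.

```python
import numpy as np

def em_hawkes(times, T, mu=0.5, alpha=0.5, beta=1.0, n_iter=200):
    """EM estimation for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_j < t} alpha * beta * exp(-beta * (t - t_j)),
    fitted to sorted event times `times` observed on [0, T]."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    for _ in range(n_iter):
        # E-step: dt[i, j] = t_i - t_j; responsibility that event i was
        # triggered by an earlier event j, or came from the background.
        dt = times[:, None] - times[None, :]
        trig = np.where(dt > 0,
                        alpha * beta * np.exp(-beta * np.maximum(dt, 0.0)),
                        0.0)
        lam = mu + trig.sum(axis=1)     # lambda(t_i) under current parameters
        p_bg = mu / lam                 # P(event i is a background event)
        p = trig / lam[:, None]         # P(event i was triggered by event j)
        # M-step: closed-form updates (stationarity alpha < 1 not enforced).
        mu = p_bg.sum() / T             # background events per unit time
        alpha = p.sum() / n             # expected offspring per event
        beta = p.sum() / (p * np.maximum(dt, 0.0)).sum()  # 1 / mean delay
    return mu, alpha, beta

# Toy usage: an early burst, then sparser events on [0, 10].
print(em_hawkes([0.5, 0.6, 0.7, 2.0, 5.0, 5.1], T=10.0))
```

Each E-step attributes every event either to the background rate or to one earlier event; the M-step updates are then closed-form, which is what makes the EM route attractive for Hawkes parameter estimation.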
4. Results
4.1. Dataset
4.2. Validations
5. Discussion and Limitations
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Appendix A
| Title | Publisher (or Conference) | Date | Co-Author |
|---|---|---|---|
| Continuous-time Hierarchical Reinforcement Learning | | | |
| Hierarchically Optimal Average Reward Reinforcement Learning | ICML | 8 July 2002 | |
| Hierarchical Policy Gradient Algorithms | ICML | 2003 | |
| Bayesian Actor-Critic Algorithms | ICML | 2007 | |
| Hierarchical Average Reward Reinforcement Learning | JMLR | 2007 | |
| Regularized Policy Iteration | NeurIPS | 2008 | Amir-massoud Farahmand (Offspring) |
| Natural Actor-Critic Algorithms | Automatica | 2009 | |
| Analysis of a Classification-based Policy Iteration Algorithm | ICML | 2010 | Alessandro Lazaric (Offspring) |
| Bayesian Multi-task Reinforcement Learning | ICML | 2010 | Alessandro Lazaric (Offspring) |
| Finite-sample Analysis of LSTD | ICML | 21 June 2010 | Alessandro Lazaric (Offspring) |
| LSTD with Random Projections | NeurIPS | 2010 | Alessandro Lazaric (Offspring) |
| Finite-sample Analysis of Lasso-TD | ICML | 2011 | Alessandro Lazaric (Offspring) |
| Speedy Q-learning | NeurIPS | 2011 | |
| Classification-based Policy Iteration with a Critic | ICML | 5 May 2011 | |
| Approximate Modified Policy Iteration | AAAI | 2012 | |
| Finite-sample Analysis of Least-squares Policy Iteration | JMLR | 1 October 2012 | Alessandro Lazaric (Offspring) |
| Actor-Critic Algorithms for Risk-sensitive MDPs | NeurIPS | 2013 | Prashanth L.A. (Offspring) |
| Approximate Dynamic Programming Finally Performs Well in the Game of Tetris | NeurIPS | 2013 | |
| Algorithms for CVaR Optimization in MDPs | NeurIPS | 2014 | Yinlam Chow (Offspring) |
| High Confidence Policy Improvement | ICML | 2015 | Philip Thomas (Offspring) |
| Approximate Modified Policy Iteration and its Application to the Game of Tetris | JMLR | 2015 | |
| Maximum Entropy Semi-Supervised Inverse Reinforcement Learning | IJCAI | 2015 | Alessandro Lazaric (Offspring) |
| Personalized Ad Recommendation Systems for Life-time Value Optimization with Guarantees | IJCAI | 2015 | Philip Thomas (Offspring) |
| Policy Gradient for Coherent Risk Measures | NeurIPS | 2015 | Aviv Tamar (Co), Yinlam Chow (Offspring) |
| Proximal Gradient Temporal Difference Learning Algorithms | IJCAI | 2016 | |
| Regularized Policy Iteration with Nonparametric Function Spaces | JMLR | 1 January 2016 | |
| Analysis of Classification-based Policy Iteration Algorithms | JMLR | 2016 | Alessandro Lazaric (Offspring) |
| Safe Policy Improvement by Minimizing Robust Baseline Regret | NeurIPS | 2016 | Yinlam Chow (Offspring) |
| Bayesian Policy Gradient and Actor-Critic Algorithms | JMLR | 2016 | |
| Improved Learning Complexity in Combinatorial Pure Exploration Bandits | AISTATS | 2 May 2016 | Alessandro Lazaric (Offspring) |
| Sequential Decision-making with Coherent Risk | IEEE TAC | 2017 | Aviv Tamar (Co), Yinlam Chow (Offspring) |
| Risk-constrained Reinforcement Learning with Percentile Risk Criteria | JMLR | 2018 | Yinlam Chow (Offspring) |
| More Robust Doubly Robust Off-policy Evaluation | ICML | 2018 | Yinlam Chow (Offspring) |
| A Lyapunov-based Approach to Safe Reinforcement Learning | NeurIPS | 2018 | Yinlam Chow (Offspring) |
| Path Consistency Learning in Tsallis Entropy Regularized MDPs | ICML | 3 July 2018 | Yinlam Chow (Offspring) |
| Garbage In, Reward Out: Bootstrapping Exploration in Multi-armed Bandits | ICML | 2019 | |
| Tight Regret Bounds for Model-based Reinforcement Learning with Greedy Policies | NeurIPS | 2019 | Yinlam Chow (Offspring) |
| Risk-sensitive Generative Adversarial Imitation Learning | AISTATS | 11 April 2019 | Yinlam Chow (Offspring) |
| Optimizing Over a Restricted Policy Class in MDPs | AISTATS | 11 April 2019 | |
| Predictive Coding for Locally-linear Control | ICML | 21 November 2020 | Yinlam Chow (Offspring) |
| Title | Citations (Total) | Citations (1 Year) | Citations (3 Years) | Citations (5 Years) |
|---|---|---|---|---|
| Continuous-time Hierarchical Reinforcement Learning | 55 | 1 | 4 | 14 |
| Hierarchically Optimal Average Reward Reinforcement Learning | 21 | 1 | 4 | 7 |
| Hierarchical Policy Gradient Algorithms | 79 | 1 | 7 | 17 |
| Bayesian Actor-Critic Algorithms | 75 | 1 | 8 | 23 |
| Hierarchical Average Reward Reinforcement Learning | 42 | 8 | 13 | 16 |
| Regularized Policy Iteration | 163 | 1 | 15 | 45 |
| Natural Actor-Critic Algorithms | 1084 | 5 | 43 | 127 |
| Analysis of a Classification-based Policy Iteration Algorithm | 88 | 2 | 19 | 41 |
| Bayesian Multi-task Reinforcement Learning | 143 | 1 | 14 | 35 |
| Finite-sample Analysis of LSTD | 87 | 3 | 25 | 40 |
| LSTD with Random Projections | 74 | 1 | 14 | 28 |
| Finite-sample Analysis of Lasso-TD | 57 | 4 | 23 | 33 |
| Speedy Q-learning | 215 | 4 | 11 | 27 |
| Classification-based Policy Iteration with a Critic | 30 | 9 | 17 | 23 |
| Approximate Modified Policy Iteration | 62 | 1 | 16 | 30 |
| Finite-sample Analysis of Least-squares Policy Iteration | 133 | 3 | 10 | 22 |
| Actor-Critic Algorithms for Risk-sensitive MDPs | 310 | 1 | 2 | 17 |
| Approximate Dynamic Programming Finally Performs Well in the Game of Tetris | 78 | 1 | 13 | 35 |
| Algorithms for CVaR Optimization in MDPs | 370 | 4 | 17 | 52 |
| High Confidence Policy Improvement | 220 | 2 | 28 | 76 |
| Approximate Modified Policy Iteration and its Application to the Game of Tetris | 154 | 2 | 11 | 37 |
| Maximum Entropy Semi-Supervised Inverse Reinforcement Learning | 38 | 2 | 10 | 22 |
| Personalized Ad Recommendation Systems for Life-time Value Optimization with Guarantees | 200 | 3 | 22 | 61 |
| Policy Gradient for Coherent Risk Measures | 142 | 2 | 8 | 23 |
| Proximal Gradient Temporal Difference Learning Algorithms | 32 | 1 | 8 | 16 |
| Regularized Policy Iteration with Nonparametric Function Spaces | 125 | 2 | 12 | 41 |
| Analysis of Classification-based Policy Iteration Algorithms | 52 | 1 | 10 | 23 |
| Safe Policy Improvement by Minimizing Robust Baseline Regret | 161 | 1 | 13 | 55 |
| Bayesian Policy Gradient and Actor-Critic Algorithms | 61 | 1 | 8 | 26 |
| Improved Learning Complexity in Combinatorial Pure Exploration Bandits | 45 | 2 | 14 | 30 |
| Sequential Decision-making with Coherent Risk | 80 | 6 | 23 | 57 |
| Risk-constrained Reinforcement Learning with Percentile Risk Criteria | 562 | 3 | 43 | 181 |
| More Robust Doubly Robust Off-policy Evaluation | 275 | 6 | 96 | 180 |
| A Lyapunov-based Approach to Safe Reinforcement Learning | 579 | 5 | 114 | 355 |
| Path Consistency Learning in Tsallis Entropy Regularized MDPs | 46 | 2 | 21 | 31 |
| Garbage In, Reward Out: Bootstrapping Exploration in Multi-armed Bandits | 78 | 6 | 39 | 68 |
| Tight Regret Bounds for Model-based Reinforcement Learning with Greedy Policies | 78 | 5 | 46 | 70 |
| Risk-sensitive Generative Adversarial Imitation Learning | 34 | 3 | 20 | 29 |
| Optimizing Over a Restricted Policy Class in MDPs | 12 | 3 | 9 | 11 |
| Predictive Coding for Locally-linear Control | 24 | 2 | 16 | 24 |
| Title | Publisher (or Conference) | Date | Co-Author |
|---|---|---|---|
| Reinforcement learning in continuous action spaces through sequential monte carlo methods | NeurIPS | 2007 | |
| Transfer of samples in batch reinforcement learning | ICML | 5 July 2008 | |
| Finite-sample analysis of LSTD | ICML | 21 June 2010 | M. Ghavamzadeh (Cluster) |
| Analysis of a Classification-based Policy Iteration Algorithm | ICML | 2010 | M. Ghavamzadeh (Cluster) |
| Bayesian Multi-task Reinforcement Learning | ICML | 2010 | M. Ghavamzadeh (Cluster) |
| LSTD with random projections | NeurIPS | 2010 | M. Ghavamzadeh (Cluster) |
| Finite-sample analysis of Lasso-TD | ICML | 2011 | M. Ghavamzadeh (Cluster) |
| Transfer from multiple MDPs | NeurIPS | 2011 | |
| Risk-aversion in multi-armed bandits | NeurIPS | 2012 | |
| Finite-sample analysis of least-squares policy iteration | JMLR | 1 October 2012 | M. Ghavamzadeh (Cluster) |
| Sequential transfer in multi-armed bandit with finite set of models | NeurIPS | 2013 | |
| Exploiting easy data in online optimization | NeurIPS | 2014 | |
| Best-arm identification in linear bandits | NeurIPS | 2014 | |
| Sparse multi-task reinforcement learning | NeurIPS | 2014 | |
| Maximum entropy semi-supervised inverse reinforcement learning | IJCAI | 25 July 2015 | M. Ghavamzadeh (Cluster) |
| Direct policy iteration with demonstrations | IJCAI | 25 July 2015 | |
| Analysis of Classification-based Policy Iteration Algorithms | JMLR | 2016 | M. Ghavamzadeh (Cluster) |
| Improved Learning Complexity in Combinatorial Pure Exploration Bandits | AISTATS | 2 May 2016 | M. Ghavamzadeh (Cluster) |
| Regret minimization in MDPs with options without prior knowledge | NeurIPS | 2017 | |
| Near optimal exploration-exploitation in non-communicating Markov decision processes | NeurIPS | 2018 | |
| Fighting boredom in recommender systems with linear reinforcement learning | NeurIPS | 2018 | |
| Efficient bias-span-constrained exploration-exploitation in reinforcement learning | ICML | 3 July 2018 | |
| Improved regret bounds for Thompson sampling in linear quadratic control problems | ICML | 3 July 2018 | Marc Abeille (Offspring) |
| A structured prediction approach for generalization in cooperative multi-agent reinforcement learning | NeurIPS | 2019 | |
| Exploration bonus for regret minimization in discrete and continuous average reward MDPs | NeurIPS | 2019 | |
| Regret bounds for learning state representations in reinforcement learning | NeurIPS | 2019 | |
| Limiting extrapolation in linear approximate value iteration | NeurIPS | 2019 | E. Brunskill (Co-author) |
| Provably efficient reward-agnostic navigation with linear value iteration | NeurIPS | 2020 | E. Brunskill (Co-author) |
| Improved sample complexity for incremental autonomous exploration in MDPs | NeurIPS | 2020 | Jean Tarbouriech (Offspring) |
| Learning near optimal policies with low inherent Bellman error | ICML | 21 November 2020 | E. Brunskill (Co-author) |
| Frequentist regret bounds for randomized least-squares value iteration | AISTATS | 3 June 2020 | E. Brunskill (Co-author) |
| Active model estimation in Markov decision processes | UAI | 2020 | Jean Tarbouriech (Offspring) |
| No-regret exploration in goal-oriented reinforcement learning | ICML | 21 November 2020 | Jean Tarbouriech (Offspring) |
| Efficient optimistic exploration in linear-quadratic regulators via Lagrangian relaxation | ICML | 21 November 2020 | Marc Abeille (Offspring) |
| Reinforcement learning with prototypical representations | ICML | 1 July 2021 | Denis Yarats (Offspring) |
| Mastering visual continuous control: Improved data-augmented reinforcement learning | arXiv | 20 July 2021 | Denis Yarats (Offspring) |
| A provably efficient sample collection strategy for reinforcement learning | NeurIPS | 6 December 2021 | Jean Tarbouriech (Offspring) |
| Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret | NeurIPS | 6 December 2021 | Jean Tarbouriech (Offspring) |
| Reinforcement learning in linear MDPs: Constant regret and representation selection | NeurIPS | 6 December 2021 | Matteo Papini (Offspring) |
| Adaptive multi-goal exploration | AISTATS | 3 May 2022 | Jean Tarbouriech (Offspring) |
| Title | Citations (Total) | Co-Author | Citations (1 Year) | Citations (3 Years) | Citations (5 Years) |
|---|---|---|---|---|---|
| Reinforcement learning in continuous action spaces through sequential monte carlo methods | 197 | | 6 | 26 | 45 |
| Transfer of samples in batch reinforcement learning | 209 | | 4 | 13 | 28 |
| Finite-sample analysis of LSTD | 87 | M. Ghavamzadeh (Cluster) | 3 | 25 | 40 |
| Analysis of a Classification-based Policy Iteration Algorithm | 88 | M. Ghavamzadeh (Cluster) | 2 | 19 | 41 |
| Bayesian Multi-task Reinforcement Learning | 143 | M. Ghavamzadeh (Cluster) | 1 | 14 | 35 |
| LSTD with random projections | 73 | M. Ghavamzadeh (Cluster) | 6 | 17 | 30 |
| Finite-sample analysis of Lasso-TD | 57 | M. Ghavamzadeh (Cluster) | 4 | 23 | 33 |
| Transfer from multiple MDPs | 62 | | 4 | 13 | 22 |
| Risk-aversion in multi-armed bandits | 186 | | 5 | 17 | 27 |
| Finite-sample analysis of least-squares policy iteration | 133 | M. Ghavamzadeh (Cluster) | 3 | 10 | 22 |
| Sequential transfer in multi-armed bandit with finite set of models | 121 | | 1 | 18 | 32 |
| Exploiting easy data in online optimization | 62 | | 1 | 19 | 32 |
| Best-arm identification in linear bandits | 212 | | 2 | 21 | 45 |
| Sparse multi-task reinforcement learning | 83 | | 1 | 10 | 22 |
| Maximum entropy semi-supervised inverse reinforcement learning | 38 | M. Ghavamzadeh (Cluster) | 2 | 10 | 22 |
| Direct policy iteration with demonstrations | 44 | | 2 | 12 | 26 |
| Analysis of Classification-based Policy Iteration Algorithms | 52 | M. Ghavamzadeh (Cluster) | 1 | 10 | 23 |
| Improved Learning Complexity in Combinatorial Pure Exploration Bandits | 45 | M. Ghavamzadeh (Cluster) | 2 | 14 | 30 |
| Regret minimization in MDPs with options without prior knowledge | 30 | | 2 | 9 | 16 |
| Near optimal exploration-exploitation in non-communicating Markov decision processes | 49 | | 1 | 12 | 27 |
| Fighting boredom in recommender systems with linear reinforcement learning | 53 | | 3 | 24 | 50 |
| Efficient bias-span-constrained exploration-exploitation in reinforcement learning | 115 | | 1 | 18 | 61 |
| Improved regret bounds for Thompson sampling in linear quadratic control problems | 106 | Marc Abeille (Offspring) | 1 | 45 | 87 |
| A structured prediction approach for generalization in cooperative multi-agent reinforcement learning | 27 | | 1 | 7 | 21 |
| Exploration bonus for regret minimization in discrete and continuous average reward MDPs | 44 | | 1 | 24 | 36 |
| Regret bounds for learning state representations in reinforcement learning | 13 | | 2 | 9 | 13 |
| Limiting extrapolation in linear approximate value iteration | 39 | E. Brunskill (Co-author) | 10 | 23 | 38 |
| Provably efficient reward-agnostic navigation with linear value iteration | 61 | E. Brunskill (Co-author) | 1 | 16 | 47 |
| Improved sample complexity for incremental autonomous exploration in MDPs | 17 | Jean Tarbouriech (Offspring) | 1 | 10 | 17 |
| Learning near optimal policies with low inherent Bellman error | 234 | E. Brunskill (Co-author) | 1 | 67 | 198 |
| Frequentist regret bounds for randomized least-squares value iteration | 146 | E. Brunskill (Co-author) | 2 | 49 | 126 |
| Active model estimation in Markov decision processes | 29 | Jean Tarbouriech (Offspring) | 1 | 5 | 23 |
| No-regret exploration in goal-oriented reinforcement learning | 43 | Jean Tarbouriech (Offspring) | 3 | 29 | 42 |
| Efficient optimistic exploration in linear-quadratic regulators via Lagrangian relaxation | 40 | Marc Abeille (Offspring) | 1 | 28 | 40 |
| Reinforcement learning with prototypical representations | 210 | Denis Yarats (Offspring) | 1 | 90 | 209 |
| Mastering visual continuous control: Improved data-augmented reinforcement learning | 259 | Denis Yarats (Offspring) | 1 | 78 | 258 |
| A provably efficient sample collection strategy for reinforcement learning | 18 | Jean Tarbouriech (Offspring) | 1 | 6 | 18 |
| Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret | 34 | Jean Tarbouriech (Offspring) | 3 | 28 | |
| Reinforcement learning in linear MDPs: Constant regret and representation selection | 19 | Matteo Papini (Offspring) | 6 | 19 | |
| Adaptive multi-goal exploration | 4 | Jean Tarbouriech (Offspring) | 1 | 4 | |
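The 1-, 3-, and 5-year columns above are cumulative snapshots, whereas a Hawkes likelihood consumes timestamped events. Below is a minimal sketch, assuming only the record layout of these tables, of how such snapshots could be coarsened into per-paper citation-event sequences; this is our preprocessing stand-in, not the paper's pipeline.

```python
# Hypothetical minimal record format mirroring the appendix tables:
# (title, venue, year, (citations at 1, 3, and 5 years after publication)).
papers = [
    ("Natural Actor-Critic Algorithms", "Automatica", 2009, (5, 43, 127)),
    ("Speedy Q-learning", "NeurIPS", 2011, (4, 11, 27)),
]

def snapshot_to_events(counts_135):
    """Coarsen cumulative 1/3/5-year citation snapshots into event times
    (years since publication) by spreading each window's increment
    uniformly -- a crude stand-in for exact citation timestamps."""
    c1, c3, c5 = counts_135
    windows = [(0.0, 1.0, c1), (1.0, 3.0, c3 - c1), (3.0, 5.0, c5 - c3)]
    events = []
    for start, end, k in windows:
        width = (end - start) / max(k, 1)
        events.extend(start + width * (i + 0.5) for i in range(k))
    return events

for title, venue, year, counts in papers:
    ev = snapshot_to_events(counts)
    print(f"{title} ({venue} {year}): {len(ev)} citation events, "
          f"first at +{ev[0]:.2f} yr, last at +{ev[-1]:.2f} yr")
```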