The Black Box Paradox: AI Models and the Epistemological Crisis in Motor Control Research
Abstract
1. Introduction
2. The Nature of DL Black Boxes
2.1. Defining the Black Box Problem
2.2. The Performance-Interpretability Trade-Off
2.3. Limitations of Current Interpretability Techniques
3. The Motor Control Conundrum: Why AI Models Might Exacerbate Foundational Uncertainties
3.1. The Unresolved Theoretical Landscape
3.2. The Parameter-Variable Distinction Crisis
3.3. Competing Frameworks: Irreconcilable or Complementary?
3.4. The Data-Theory Gap in Motor Control Neuroscience Research
3.5. Why DL Might Struggle to Resolve These Debates
3.6. The Clinical Stakes
3.7. The Path Forward Requires Theoretical Clarity
4. The Epistemological Challenge in Neuroscience
4.1. Scientific Understanding vs. Predictive Performance
4.2. The Problem of Multiple Realizability
4.3. The Circularity Problem: Explaining the Brain with Artificial Brains
5. The Illusion of Understanding
5.1. Confusing Correlation with Causation
5.2. The Anthropomorphization of Artificial Networks
5.3. The Seductive Appeal of Complexity
6. Looking Forward
6.1. The Need for Epistemic Humility
6.2. Rethinking Evaluation Criteria
6.3. Education and Training
6.4. Developing New Theoretical Frameworks
6.5. Fostering Interdisciplinary Collaboration
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AI | Artificial Intelligence |
ML | Machine Learning |
DL | Deep Learning |
ANN | Artificial Neural Network |
References
Framework | Core Assumptions | Key Predictions/Empirical Signatures | Decisive Paradigms | Can DL Test This? (How/Limits) |
---|---|---|---|---|
Optimal Feedback Control (OFC) | Goal-directed actions are optimized through internal cost functions and state estimation. | Minimal Intervention Principle (MIP); variability aligned with task-irrelevant dimensions; rapid, context-dependent feedback gains. | Mechanical/visual perturbations; obstacle/anisotropy tasks; variance–covariance alignment. | Partly. DL might provide insight into motor output and outcomes, but it does not allow direct inference of the cost function being minimized. |
Internal Models (Forward/Inverse) | Predictive mappings: forward models predict the sensory consequences of motor commands, while inverse models compute the commands needed to achieve desired outcomes. | After-effects (e.g., prism/force-field washout); context-specific generalization; sensory attenuation. | Prism/curl-field adaptation; visuomotor rotations; context-switching. | Partly. DL can capture adaptation curves, but mechanistic inference about the command architecture requires explicit prediction-error variables or forward-model priors. |
Dynamical Systems | Movement emerges from low-dimensional neural dynamics/manifolds; preparatory activity sets the initial conditions of motor output. | Rotational dynamics; low-dimensional manifolds; preparatory-to-movement transitions; robustness to perturbations; often diffuse encoding of motor variables. | Neural population recordings; transient cortical perturbations; dynamical fits vs. tuning models. | Partly. DL latent-state models (RNNs, sequential auto-encoders) can recover trajectories and map latent variables (see the sketch after this table), but insight into biological plausibility remains limited. |
Equilibrium-Point (λ-model) | The CNS specifies the threshold (λ) of the tonic stretch reflex; movements arise from centrally set shifts in λ. | EMG patterns following referent shifts; unloading effects; posture-to-movement continuity. | Unloading/force perturbations; tonic stretch reflex manipulations. | No. The core parameter and assumptions of this model remain theoretical and experimentally undefined beyond logic and interpretation. |
Ecological/Perception–Action | Control is grounded in affordances; tight perception–action coupling. | No central executive organizing movement; direct perception; self-organization; affordance-based organization. | Constraints-led approaches; affordance-based motor outcomes. | No. Sensory and motor activity may be traced, but the core assumptions remain largely untestable beyond fragile inference from outcomes. |
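To make the dynamical-systems row concrete, the minimal sketch below (our illustration, not a method from any cited study) fits a small GRU-based sequence autoencoder to synthetic "neural population" data and recovers a low-dimensional latent trajectory. The dimensions, the synthetic rotational dynamics, and the architecture are all hypothetical choices made only for illustration; the point is that recovering latents of this kind demonstrates predictive adequacy, not biological plausibility.

```python
# Illustrative sketch only (hypothetical data and architecture): a GRU sequence
# autoencoder recovering low-dimensional latent trajectories from synthetic
# "neural population" activity generated by rotational dynamics.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 2-D rotational latents projected into 50 observed "neurons".
n_trials, n_steps, n_neurons, n_latent = 64, 100, 50, 2
theta = 0.1  # rotation angle per time step
A = torch.tensor([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])
z = [torch.randn(n_trials, n_latent)]
for _ in range(n_steps - 1):
    z.append(z[-1] @ A.T)
Z = torch.stack(z, dim=1)                                   # (trials, steps, latent)
C = torch.randn(n_latent, n_neurons) / math.sqrt(n_latent)  # random readout
X = Z @ C + 0.1 * torch.randn(n_trials, n_steps, n_neurons)

class LatentStateModel(nn.Module):
    """GRU encoder with a low-dimensional bottleneck and a linear readout."""
    def __init__(self, n_obs, n_hidden=32, n_lat=2):
        super().__init__()
        self.encoder = nn.GRU(n_obs, n_hidden, batch_first=True)
        self.to_latent = nn.Linear(n_hidden, n_lat)
        self.readout = nn.Linear(n_lat, n_obs)

    def forward(self, x):
        h, _ = self.encoder(x)        # (trials, steps, hidden)
        z_hat = self.to_latent(h)     # inferred latent trajectory
        return self.readout(z_hat), z_hat

model = LatentStateModel(n_neurons)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(150):
    opt.zero_grad()
    x_hat, z_hat = model(X)
    loss = nn.functional.mse_loss(x_hat, X)
    loss.backward()
    opt.step()

# The recovered latents reconstruct the data well, but they are identifiable only
# up to an invertible transformation, so good reconstruction by itself does not
# certify that the model's states correspond to biological dynamics.
print(f"final reconstruction MSE: {loss.item():.4f}")
```

A full sequential autoencoder of the kind used on motor-cortical recordings would add per-trial initial conditions and inferred inputs; this stripped-down version only illustrates the latent-recovery step and its interpretive limits.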
Criterion | Question | Scoring (0–2)/Notes |
---|---|---|
Task type (categorical) | Is the study aimed at engineering-style prediction/decoding, mechanistic inference, or hybrid use? | Classification only (no score) |
Mechanistic commitments | Are mechanistic claims explicit, tied to identifiable entities/activities and to a theoretical framework (e.g., optimal feedback control, internal models, dynamical systems, equilibrium-point, ecological)? | 0 = None/implicit; 1 = Partial/indirect; 2 = Explicit & testable |
Causal testability | Does the study include interventions, counterfactual predictions, or ablation tests that could falsify mechanistic claims? | 0 = No; 1 = Limited/indirect; 2 = Direct & rigorous |
Biophysical plausibility | Do model states/parameters map plausibly onto known neurophysiology/biomechanics (population dynamics, spinal circuits, local learning rules)? | 0 = None; 1 = Coarse analogy; 2 = Substantive mapping |
Generalization tests | Does the model generalize beyond training data (new tasks, effectors, datasets, labs)? | 0 = In-distribution only; 1 = Limited; 2 = Multiple rigorous tests |
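As a worked illustration of how the rubric above might be applied, the sketch below encodes the four scored criteria as a small data structure and aggregates them into a simple sum out of 8. The field names, the example study, and the aggregation rule are our assumptions for illustration only; they are not part of any standardized instrument.

```python
# Illustrative sketch only: one way the rubric could be operationalized when
# auditing DL studies in motor control. Field names and the sum-out-of-8
# aggregation are assumptions, not part of the published checklist.
from dataclasses import dataclass

@dataclass
class RubricScore:
    task_type: str                 # "prediction", "mechanistic", or "hybrid" (classified, not scored)
    mechanistic_commitments: int   # 0 = none/implicit, 1 = partial/indirect, 2 = explicit & testable
    causal_testability: int        # 0 = no, 1 = limited/indirect, 2 = direct & rigorous
    biophysical_plausibility: int  # 0 = none, 1 = coarse analogy, 2 = substantive mapping
    generalization_tests: int      # 0 = in-distribution only, 1 = limited, 2 = multiple rigorous tests

    def total(self) -> int:
        scores = (self.mechanistic_commitments, self.causal_testability,
                  self.biophysical_plausibility, self.generalization_tests)
        assert all(0 <= s <= 2 for s in scores), "each criterion is scored 0-2"
        return sum(scores)

# Hypothetical example: a decoding study with strong generalization benchmarks
# but no explicit mechanistic or causal claims.
study = RubricScore(task_type="prediction", mechanistic_commitments=0,
                    causal_testability=0, biophysical_plausibility=1,
                    generalization_tests=2)
print(study.total(), "/ 8")  # -> 3 / 8
```

Treating the task-type criterion as a classification rather than a score, as the rubric specifies, keeps engineering-style decoding work from being penalized for goals it never claimed.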
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).