Opinion

Assessing the Utility of Organoid Intelligence: Scientific and Ethical Perspectives

by Michael W. Nestor 1,* and Richard L. Wilson 2

1 Autica Bio, 801 W Baltimore St., Baltimore, MD 21201, USA
2 Department of Philosophy & Religious Studies, Towson University, 8000 York Road, Towson, MD 21252, USA
* Author to whom correspondence should be addressed.
Organoids 2025, 4(2), 9; https://doi.org/10.3390/organoids4020009
Submission received: 1 March 2025 / Revised: 24 April 2025 / Accepted: 27 April 2025 / Published: 1 May 2025

Abstract

The development of brain organoids from human-induced pluripotent stem cells (iPSCs) has expanded research into neurodevelopment, disease modeling, and drug testing. More recently, the concept of organoid intelligence (OI) has emerged, proposing that these constructs could evolve to support learning, memory, or even sentience. While this perspective has driven enthusiasm in the field of organoid research and suggested new applications in fields such as neuromorphic computing, it also introduces significant scientific and conceptual concerns. Current brain organoids lack the anatomical complexity, network organization, and sensorimotor integration necessary for intelligence or sentience. Despite this, claims surrounding OI often rely on oversimplified interpretations of neural activity, fueled by neurorealist and reification biases that misattribute neurophysiological properties to biologically limited systems. Beyond scientific limitations, the framing of OI risks imposing ethical and regulatory challenges based on speculative concerns rather than empirical evidence. The assumption that organoids might possess sentience, or could develop it over time, could lead to unnecessary restrictions on legitimate research while misrepresenting their actual capabilities. Additionally, comparing biological systems to silicon-based computing overlooks fundamental differences in scalability, efficiency, and predictability, raising questions about whether organoids can meaningfully contribute to computational advancements. The field must recognize the limitations of these models rather than prematurely defining OI as a distinct research domain. A more cautious, evidence-driven approach is necessary to ensure that brain organoids remain valuable tools for neuroscience without overstating their potential. 
To maintain scientific credibility and public trust, it is essential to separate speculative narratives from grounded research, thus allowing for continued progress in organoid studies without reinforcing misconceptions about intelligence or sentience.

1. Introduction

Since the derivation of induced pluripotent stem cells (iPSCs) by Takahashi and Yamanaka (2006), the field of organoid research has rapidly evolved [1]. In 2013, Lancaster et al. demonstrated the first cerebral organoids capable of mimicking aspects of human brain development [2]. Subsequent studies expanded this work to include cortical layering and dynamic network activity [3], complex oscillatory behaviors [4], and disease modeling using patient-derived cells [5]. Fitzgerald et al. (2024) introduced semi-guided cortical organoids with more structured oscillatory profiles, advancing reproducibility in brain-like electrical activity [6]. Complementing this, Pașca et al. (2022) led a cross-institutional effort to define a nomenclature and framework for nervous system organoids and assembloids, underscoring the growing need for coordinated standards in this rapidly evolving field [7]. Most recently, Maisumu et al. (2025) outlined progress toward developing more complex neural circuitry in brain organoids and identified several technical challenges that remain to be addressed as the field advances [8].
The cultivation of human iPSCs into brain organoids has facilitated a wide range of research, from modeling neurodevelopmental disorders to testing pharmacological agents [9,10,11,12,13,14]. Recently, the concept of organoid intelligence (OI) has entered scientific discourse, exploring whether these in vitro constructs can approximate learning, memory, sentience, or other computational behaviors [15,16]. Proponents of OI, including Kagan et al. (2022), Smirnova et al. (2023), and Puppo and Muotri (2023), have posited that neural organoids or synthetic neuronal systems could demonstrate learning, adapt to environmental stimuli, and one day serve as bio-computational platforms [15,16,17]. These claims have energized discourse, particularly around their potential role in AI development and ethical oversight.
While the current limitations of brain organoids are substantial, future advancements in bioengineering may partially overcome these barriers. For example, assembloid models are beginning to incorporate multiple brain regions and partial sensory integration [18]. Advances in vascularization [19], closed-loop stimulation, and interface technologies may also improve the viability of organoid-based neural computation systems. Nevertheless, these developments remain in early stages, and caution is warranted in labeling such constructs as “intelligent” based on preliminary results.
Thus, such aspirations invite scrutiny regarding the ethical and conceptual soundness of attributing intelligence or even incipient sentience, let alone consciousness, to organoids [20]. Sentience, in this context, is defined minimally as meaningful responses to stimuli, together with the biological substrates needed to produce those responses. Most OI researchers treat this only as a potential first step toward consciousness, not as consciousness itself, although some argue that a stronger form of sentience, with defined sensory and feedback mechanisms, is needed to meaningfully model neurodevelopmental and neurodegenerative disorders. However, empirical validation of such claims remains sparse [21]. In particular, media reports and academic discussions sometimes invoke sensational language, casting organoids as “mini-brains” with capabilities approaching those of genuine nervous systems [22]. This narrative frequently overlooks the limited anatomical, synaptic, and functional sophistication of these constructs as compared to intact brains [23]. Moreover, an indiscriminate application of the precautionary principle could stifle research that remains valuable for understanding early brain development or pathological states [24,25]. This commentary aims to spark conversation within the brain organoid community around the concept of OI, to assess ongoing debates about potential sentience or intelligence in these systems, and to suggest a more restrained and careful perspective, including retiring the term “OI” as nomenclature for the research activities clustered around brain organoids.
Brain organoids arise from iPSCs or embryonic stem cells exposed to conditions that promote neuroectoderm differentiation [3]. These structures can mimic various aspects of brain development and function and have been instrumental in modeling neurological disorders, studying brain development, and exploring neuroplasticity [26]. Incorporating an OI approach into traditional developmental neurotoxicity assays has been proposed to enhance the assessment of chemical impacts on neural plasticity, a critical aspect not adequately addressed by current in vitro assays [16]. Recent research has demonstrated that isolated brain cells can “learn” and perform tasks through stimulus-driven training techniques inspired by machine learning, suggesting potential applications for OI in biocomputing, where organoids composed of such networks could be trained to process information in ways analogous to biological neural networks. For instance, researchers at Cortical Labs demonstrated that human and mouse primary cortical neurons, when supplied with sufficient stimuli, could “play” the game Pong through an adaptive interface. Their experiment illustrated the capacity of ex vivo cortical networks to process information and adapt their responses based on feedback, which the authors presented as evidence of sentience, a fundamental characteristic of intelligence [15].
The published paper containing their experiment drew a swift response from the brain organoid community, which rebuked the notion that simplistic models like single layers of in vitro neurons could “self-organize activity to display intelligent and sentient behavior when embodied in a simulated game-world” [27]. Indeed, brain organoids are only a few orders of magnitude more complex than single-layer neuronal cultures, and although they display more complex activity, as we and others have argued elsewhere, brain organoids are still early in development, and concepts of sentience and “intelligence” remain premature [28].
Even if we accept the possibility of sentience in brain organoids, as proposed by the proponents of the OI nomenclature, it remains crucial to evaluate whether these structures possess the necessary conditions to support intelligence. If we grant that research in the field of OI is solely focused on developing “functional aspects related to learning, cognition, and computing” without directly addressing consciousness, these functions nonetheless serve as precursors to both intelligence and consciousness. As a result, the OI framework will inevitably converge on debates about consciousness, regardless of the field’s current intent to circumvent the issue.
Yet, brain organoids, and even their modern cousins, assembloids [7], currently lack the cytoarchitecture to support OI. The human brain has ~86 billion neurons and a vast, evolutionarily developed architecture with cortical layers, thalamocortical loops, and long-range connectivity [23]. Organoids contain only ~100,000–200,000 neurons, lack structured connectivity, and do not engage in the sensory–motor interactions required for cognition [3]. Consciousness requires coordinated activity across wide neural networks with functional specialization and dynamic feedback loops [29]. The authors of the OI program may argue that, over time, brain organoids will gain the complexity to approach systems that highly mimic the brain, even if the current complexity is low. This perspective may be overly optimistic, as even the most advanced protocols for generating brain organoids and assembloids (multiregional constructs created by fusing distinct region-specific organoids) still result in inherently stochastic and disorganized cellular and network architecture, which lack the developmental cues and spatial patterning found in the in vivo human brain [30].
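The scale gap described above can be made concrete with a back-of-the-envelope calculation (a sketch using the figures cited in the text; 150,000 is an assumed midpoint of the reported organoid range, not a measured value):

```python
import math

# Figures cited in the text; 150,000 is an assumed midpoint of the
# ~100,000-200,000 neuron range reported for brain organoids.
human_brain_neurons = 86e9   # ~86 billion neurons in the human brain
organoid_neurons = 150e3     # assumed representative organoid count

ratio = human_brain_neurons / organoid_neurons
orders_of_magnitude = math.log10(ratio)

print(f"neuron ratio: ~{ratio:,.0f}x")
print(f"orders of magnitude: ~{orders_of_magnitude:.1f}")
```

By this rough measure, an organoid falls short of an intact human brain by nearly six orders of magnitude in neuron count alone, before accounting for connectivity, cortical layering, or sensorimotor integration.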
Thus, this manuscript aims to critically examine the conceptual, empirical, and ethical limitations associated with OI and to advocate for a more restrained, evidence-based framing of brain organoid capabilities, one that preserves stakeholder confidence and sustains public and scientific support for brain organoid research at this early and formative stage of the field.

2. The Greely Dilemma and Premature Application of the Precautionary Principle

To address some of the ethical challenges associated with sentience in brain organoids, Greely et al. [31] suggested a version of Hilary Putnam’s “brain in a vat” argument, stating that, “If it looks like a human brain and acts like a human brain…to what point do we have to treat it like a human brain…” Thus, the precautionary principle has been suggested as a way to deal with challenges like the Greely Dilemma, essentially assuming that all organoids have sentience and, potentially, consciousness, and shifting the debate to what kind of consciousness they possess [20]. These types of arguments set the stage for a permissive environment of speculation around brain organoid sentience and consciousness that led to the development of the OI framework. On the surface, a simple application of the precautionary principle in this case (i.e., assuming that all brain organoids will have sentience and, therefore, consciousness) seems like the best way to protect the field from ethical or regulatory challenges to progress, but the use of the term “OI” may have the opposite effect.
Premature application of the precautionary principle risks misrepresenting the ethical stakes and could invite regulation based on speculative claims. While some ethicists, including Greely (2021), raise valid concerns about future sentience, asserting such properties prematurely may erode scientific credibility and misdirect public policy [32]. Overestimating the capabilities of brain organoids, and founding the field (or term) OI on those hyped capabilities, can lead to public distrust and an overly burdensome regulatory environment, inhibiting innovation and therapeutic advancement in brain organoid research. This has been seen before in the case of embryonic stem cell research and other research areas [22,33]. The term “brain-computer interface” has experienced hype cycles, with early speculative claims about mind-reading capabilities leading to unfounded societal concerns, including the codification of the term “neurorights” [34]. Neurorights, in turn, have been elevated to national policy and recently codified in the Chilean Constitution [19]. Yet, concepts like “neurofeedback” and “brain-computer interface,” which led to such legislative action, suffer from the same lack of empirical clarity as OI. This convergence may inhibit innovation within neuroscience research in Chile due to hasty legislation based on ambiguity [19], illustrating the danger of prematurely applying the precautionary principle to empirically fuzzy data.

3. Ryle’s Category Mistake

Much of the scientific hype around the capabilities of brain organoids and, by extension, OI, particularly claims that sentience arises from the random neural circuits that form within them, is a form of “category mistake” or “category error,” described by Gilbert Ryle in The Concept of Mind [35]. He used this concept to critique Cartesian dualism, which, he argued, incorrectly treats the mind as a distinct, non-physical entity separate from the body. Ryle’s category mistake is both a semantic and an ontological error that occurs when a concept or entity is misclassified as belonging to a different logical category than the one to which it actually belongs. Ryle illustrated this with the example of a visitor to a university who, after seeing various colleges, libraries, and lecture halls, asks, “But where is the university?” The visitor’s mistake is the failure to realize that the university is not an additional physical entity but rather the sum of its parts.
The tendency to equate organized electrophysiological activity in organoids with sentience is a category mistake. Neural oscillations and plasticity in organoids do not inherently signal cognitive integration or subjective experience. Without structured input–output systems, these constructs lack essential features of cognition and should not be classified in the same domain as functioning brains. In this sense, OI commits a category error in presuming that sentience in organoids, and intelligence after that, is achieved simply by setting the conditions in which cortical neurons can form networks and respond to stimuli with organized electrophysiological activity, such as the electrical and calcium oscillations observed in brain organoids by our group and others [5,6]. Organoids may show marginally more complex electrophysiological signatures than 2D cultures but do not surpass well-differentiated ex vivo rodent preparations in terms of functional connectivity or plasticity. These comparisons suggest that claims of OI’s near-term feasibility are premature. In this way, the mistake places brain organoids in the same category as intact human brains and presupposes equivalent capabilities.

4. Eroom’s Law and Neuromorphic Computing in Brain Organoids

One of the central concepts driving the establishment of the term OI is a recognition of the increasing limitations of traditional, silicon-based computing systems. Moore’s Law, which predicted the doubling of transistor density approximately every two years, is approaching its physical limits [36]. As computational demands grow, particularly in AI applications, researchers are exploring biological systems’ efficiency and adaptability to create novel computing paradigms. The idea that brain organoids, once they reach a certain level of complexity, will follow a similar trajectory to silicon-based systems does not account for the unpredictable nature of biological systems and their inherent inefficiencies as model systems.
This can be best illustrated by Eroom’s Law. Eroom’s Law, described by Scannell et al. [37], highlights the paradoxical decline in pharmaceutical research and development (R&D) productivity despite advances in scientific knowledge and technology. Contrary to Moore’s Law, which predicts exponential improvement in computing power, Eroom’s Law observes that the inflation-adjusted cost of bringing a new drug to market has doubled approximately every nine years, leading to an 80-fold decrease in R&D efficiency since 1950. Some of the root causes of this decline are that new drugs must outperform highly effective existing treatments, which makes incremental improvements harder to demonstrate and increases safety and efficacy standards that lead to lengthier, costlier clinical trials.
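As a quick sanity check on these figures, the stated nine-year doubling period can be compounded over time (a sketch; the 1950–2010 span is an assumption chosen to match the period covered by Scannell et al.'s analysis):

```python
def cost_multiplier(years: float, doubling_period: float = 9.0) -> float:
    """Fold-increase in inflation-adjusted R&D cost per approved drug,
    assuming a fixed doubling period (Eroom's Law)."""
    return 2 ** (years / doubling_period)

# Compounding from 1950 to ~2010 (assumed endpoint for illustration)
fold = cost_multiplier(2010 - 1950)
print(round(fold))  # ~102-fold cost increase
```

The compounded result lands in the same ballpark as the ~80-fold efficiency decline reported, which is the point of the comparison rather than an exact fit: small changes in the assumed doubling period or endpoint shift the figure substantially, underscoring how sensitive such extrapolations are.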
One major challenge is the unpredictability of biological complexity, where drug candidates that demonstrate efficacy in controlled preclinical models often fail in human trials due to unforeseen interactions within the unique and dynamic physiology of the human body. The reductionist approach of target-based drug discovery, which focuses on single molecular targets, has contributed to this inefficiency by neglecting the broader, interconnected nature of biological pathways, leading to poor translational success [38]. Moreover, biological variability among patient populations introduces additional hurdles, as even successful drugs may not perform uniformly across diverse genetic and environmental backgrounds. These inefficiencies highlight the non-linear nature of biological systems as models for computation. Neuromorphic systems must support a vast number of neurons and synapses to replicate the brain’s intricate networks, which requires hardware architectures that can scale efficiently while maintaining high interconnectivity [39]. Even though recent neuromorphic perception systems can process sensory information in ways that mimic biological processes [40], they rely on inherently non-linear and unpredictable substrates that are subject to Eroom’s Law, which may undermine the idea that “OI” is an appropriate term for a field that hypothesizes outcomes matching those of purely silicon-based systems.
As such, Eroom’s Law serves as a cautionary thought experiment, suggesting that even highly promising biological models often fail to scale or translate into predictable outcomes. The inherent stochasticity, variability, and inter-individual complexity of organoids suggest that claims about OI achieving parity with silicon systems either in learning or performance are premature.

5. OI Nomenclature May Rely on the Convergence of Reification and Neurorealism

Reification is the fallacy of treating abstract concepts as concrete entities [41]. In the case of OI, there is a risk of reifying neural activity observed in brain organoids as analogous to sentience. Although studies such as those by Trujillo et al. (2021) provide intriguing proof-of-concept demonstrations, they do not justify the reification of organoid activity as sentience [14]. Similarly, while neural organoids may generate spontaneous activity, interpreting this through a neurorealist lens inflates their capacity and risks distorting both scientific and ethical discourse.
While these structures exhibit spontaneous electrical signaling and, in some cases, synaptic plasticity [18], interpreting these phenomena as evidence of computational capability rather than as emergent biological processes can lead to misconceptions about their potential for biocomputing. This mischaracterization may result in premature attempts to harness brain organoids for OI applications without addressing fundamental limitations, such as their instability, limited scalability, and variability. Even a cursory acknowledgment of the probabilistic and emergent nature of biological networks, rather than treating organoids as deterministic, scalable representations of the human brain, uncovers the reification bias inherent in OI. Brain organoids lack crucial aspects of real neural networks, including vascularization [42], long-range connectivity, and the full range of biochemical and electrical dynamics present in a living brain. As a result, their ability to generate scalable and consistent insights for neuropharmacology and, eventually, biologically based computing remains highly uncertain.
To illustrate this in another way, consider the aforementioned issues with drug discovery and the inefficiencies of biological systems in pharmaceutical R&D. Many of these approaches rely on reductionist discovery paradigms. In modern drug development, molecular targets such as specific proteins or genetic pathways are often treated as fixed, isolated variables with predictable and scalable effects rather than as components of a highly dynamic and interconnected biological system [43]. This reification leads to an overemphasis on single-target strategies, assuming that screening a specific biomolecular pathway will reliably translate into clinical efficacy when, in reality, biological responses are emergent properties of complex, adaptive systems. As a result of this reified reductionism, many promising compounds that perform well in reductionist preclinical models fail in human trials because those models oversimplify biological mechanisms, ignoring compensatory pathways, epigenetic regulation, and patient-specific variability [44].
Neurorealism is a cognitive bias in which neuroscientific findings, especially in brain imaging, are perceived as more credible than other evidence [45,46]. This bias arises when complex brain functions are oversimplified, with colorful fMRI images creating misleading correlations between brain activity and subjective experiences [47]. The same bias underlies the proclamations that OI should be a new term and field based on the potential capabilities of brain organoids. The neurorealist framing of brain organoids as sentient “thinking” or “learning” structures may lead to premature claims about their potential in artificial intelligence and neurophilosophy without sufficient empirical evidence to support such assertions. Addressing these issues requires a more nuanced understanding of the emergent and probabilistic nature of neural systems rather than treating organoids as deterministic and scalable analogs of the human brain. A critical approach that acknowledges both the promise and limitations of these models will be essential for advancing their application in neuroscience and biotechnology without falling into the pitfalls of neurorealist exaggeration.
The neurorealist bias emerges when the spontaneous electrical oscillations and synaptic plasticity in organoids are misinterpreted as evidence of rudimentary sentience rather than as emergent biological processes arising from disorganized neural networks. This assumption is problematic because sentience, unlike mere neural activity, is thought to require an integrated system with mechanisms for processing sensory input, self-regulation, and interaction with an environment, features that brain organoids do not consistently display. Their development occurs in artificial conditions without the bodily and sensory components that shape experience in living organisms, which makes any claim of sentience, at least in the sense used in the emerging OI literature, highly speculative.
As already mentioned, due to neurorealism, there is a growing discourse surrounding the potential ethical implications of “suffering” in organoids, despite no empirical evidence supporting their capacity for sentience as traditionally defined. If the public and scientific community accept neurorealist assumptions uncritically, it could lead to misplaced ethical debates and regulatory hurdles that divert attention from more immediate ethical concerns, such as the use of organoids in drug testing and their role in biomedical research with respect to frameworks like informed consent. A more rigorous and scientifically grounded approach is necessary to distinguish between neural activity and genuine sentience and thus ensure that the discourse around brain organoids remains focused on their actual capabilities rather than speculative or exaggerated claims. Understood in this way, there is no need to declare a new field or adopt a term like OI.

6. Conflicts of Interest and Financial Incentives

Researchers sometimes create new categories to establish niche areas and secure grants [48,49]. Though this may not be the case for the establishment of OI as a term and field of inquiry, the public may perceive it that way. Prematurely developing new concepts like OI does generate additional press coverage and public attention, which may spur additional investments or other financial incentives for those perceived to be participating in the initiation phase [22,33]. Companies and academic institutions considered to be involved in the genesis of OI may benefit from increased funding for novel research areas because of the marketing hype spurred by the new term [50]. The creation of terms without robust scientific backing can mislead public perception and the policymakers who drive funding decisions, and such trends can result in wasted resources and misguided public policy. Indeed, the term “neuroplasticity” experienced similar hype in the 1990s, which led to inflated claims about brain training programs [51].
Neuroplasticity in this context refers to the brain’s ability to reorganize and adapt its structure and function in response to experience or injury. While organoids can exhibit synaptic remodeling and activity-dependent changes, these processes occur in highly restricted contexts, lacking the environmental and systemic complexity required for functional adaptation seen in vivo. When new scientific terminology is introduced without a strong empirical foundation, it can erode public trust in science. In the case of OI, the term may lead to misinterpretations of neural activity in organoids and foster unrealistic expectations about their capabilities. When such expectations are amplified by increased media attention and funding, the inevitable gap between promise and performance can foster skepticism and disillusionment, ultimately undermining support for the field. This can be seen in how the public misunderstanding of mirror neurons fueled overstated claims about empathy and learning [52,53].

7. Conclusions

In light of scientific advancements regarding brain organoids and their potential classification under the term “OI”, it is clear that both conceptual and empirical challenges must be addressed before assigning such a designation, particularly regarding claims of sentience. The discourse surrounding OI is significantly influenced by neurorealism and reification, which risk distorting the scientific reality of what brain organoids can currently achieve.
The emergence of OI as a conceptual framework reflects both the excitement and hazards of working at the intersection of biology, computation, and ethics. While brain organoids hold tremendous promise for understanding development and disease, current data do not support their classification as intelligent or sentient entities. Premature adoption of the OI label may overstate their capabilities and introduce unnecessary ethical dilemmas, distracting from more immediate scientific priorities.
While organoids have demonstrated neural plasticity and some degree of stimulus–response adaptation, these behaviors do not equate to sentience, which involves the capacity for subjective experience and sensation. Sentience requires an integrated system with dynamic interactions between sensory input, motor output, and internal regulatory mechanisms, features that are either absent in brain organoids or demonstrated only with a high level of variability. Furthermore, the absence of neurovascular and neuroendocrine integration, both of which are essential for modulating neural signaling and establishing the physiological context for sentience, further underscores the limitations of brain organoids in modeling higher-order nervous system functions.
The application of the precautionary principle to assume that these constructs possess or will inevitably develop sentience not only misrepresents their capabilities but also risks imposing unnecessary ethical and regulatory barriers. Furthermore, the overinterpreted framing of brain organoids as simplistic analogs for parts of the human brain lacks robust empirical support and exemplifies a category mistake that falsely assumes that neural activity alone is sufficient for intelligence or sentience. This misclassification could lead to public misconceptions and inflated expectations about organoid research, diverting resources from more pressing neuroscientific and biomedical challenges.
Given these concerns, it is prudent to reconsider the validity of OI as a distinct field, especially when its foundational claims rely on speculative assertions about sentience and the potential of brain organoids to produce computational models that approach silicon-based systems. The rapid establishment of new terminology, particularly in areas where empirical evidence is still emerging, risks fostering hype cycles that can ultimately erode public trust in science. Similar patterns have been observed in the premature framing of concepts like neuroplasticity and brain–computer interfaces, which have led to exaggerated claims and subsequent skepticism about scientific results.

Author Contributions

Conceptualization, M.W.N. and R.L.W.; writing—original draft preparation, M.W.N. and R.L.W.; writing—review and editing, M.W.N. and R.L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created.

Acknowledgments

During the preparation of this manuscript, the author(s) used ChatGPT4.0 for the purposes of revising manuscript language or providing more concise sentence construction where needed. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest; however, Michael W. Nestor is Chief Scientific Officer of Autica Bio, a company engaged in brain organoid research.

Abbreviations

The following abbreviations are used in this manuscript:
OI   Organoid Intelligence
iPSC   Human-Induced Pluripotent Stem Cell

References

  1. Takahashi, K.; Yamanaka, S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell 2006, 126, 663–676. [Google Scholar] [CrossRef] [PubMed]
  2. Lancaster, M.A.; Renner, M.; Martin, C.A.; Wenzel, D.; Bicknell, L.S.; Hurles, M.E.; Homfray, T.; Penninger, J.M.; Jackson, A.P.; Knoblich, J.A. Cerebral organoids model human brain development and microcephaly. Nature 2013, 501, 373–379. [Google Scholar] [CrossRef] [PubMed]
  3. Quadrato, G.; Nguyen, T.; Macosko, E.Z.; Sherwood, J.L.; Min Yang, S.; Berger, D.R.; Maria, N.; Scholvin, J.; Goldman, M.; Kinney, J.P.; et al. Cell diversity and network dynamics in photosensitive human brain organoids. Nature 2017, 545, 48–53. [Google Scholar] [CrossRef]
  4. Trujillo, C.A.; Gao, R.; Negraes, P.D.; Gu, J.; Buchanan, J.; Preissl, S.; Wang, A.; Wu, W.; Haddad, G.G.; Chaim, I.A.; et al. Complex oscillatory waves emerging from cortical organoids model early human brain network development. Cell Stem Cell 2019, 25, 558–569.e7. [Google Scholar] [CrossRef]
  5. Durens, M.; Nestor, J.; Williams, M.; Herold, K.; Niescier, R.F.; Lunden, J.W.; Phillips, A.W.; Lin, Y.-C.; Dykxhoorn, D.M.; Nestor, M.W. High-throughput screening of human induced pluripotent stem cell-derived brain organoids. J. Neurosci. Methods 2020, 335, 108627. [Google Scholar] [CrossRef]
  6. Fitzgerald, M.Q.; Chu, T.; Puppo, F.; Blanch, R.; Chillón, M.; Subramaniam, S.; Muotri, A.R. Generation of ‘semi-guided’ cortical organoids with complex neural oscillations. Nat. Protoc. 2024, 19, 2712–2738. [Google Scholar] [CrossRef]
  7. Pașca, S.P.; Arlotta, P.; Bateup, H.S.; Camp, J.G.; Cappello, S.; Gage, F.H.; Knoblich, J.A.; Kriegstein, A.R.; Lancaster, M.A.; Ming, G.L.; et al. A nomenclature consensus for nervous system organoids and assembloids. Nature 2022, 609, 907–910. [Google Scholar] [CrossRef]
  8. Maisumu, G.; Willerth, S.; Nestor, M.; Waldau, B.; Schülke, S.; Nardi, F.V.; Ahmed, O.; Zhou, Y.; Durens, M.; Liang, B.; et al. Brain organoids: Building higher-order complexity and neural circuitry models. Trends Biotechnol. 2025, 43, 456–468. [Google Scholar] [CrossRef]
  9. Costamagna, G.; Comi, G.P.; Corti, S. Advancing drug discovery for neurological disorders using iPSC-derived neural organoids. Int. J. Mol. Sci. 2021, 22, 2659. [Google Scholar] [CrossRef] [PubMed]
  10. Dionne, O.; Sabatié, S.; Laurent, B. Deciphering the physiopathology of neurodevelopmental disorders using brain organoids. Brain 2025, 148, 12–26. [Google Scholar] [CrossRef]
  11. Domene Rubio, A.; Hamilton, L. A comprehensive review on utilizing human brain organoids to study neuroinflammation in neurological disorders. J. Neuroimmune Pharmacol. 2025, 20, 23. [Google Scholar] [CrossRef]
  12. Giorgi, C.; Lombardozzi, G.; Ammannito, F.; Scenna, M.S.; Maceroni, E.; Quintiliani, M.; d’Angelo, M.; Cimini, A.; Castelli, V. Brain organoids: A game-changer for drug testing. Pharmaceutics 2024, 16, 443. [Google Scholar] [CrossRef] [PubMed]
  13. Aili, Y.; Maimaitiming, N.; Wang, Z.; Wang, Y. Brain organoids: A new tool for modelling of neurodevelopmental disorders. J. Cell. Mol. Med. 2024, 28, e18560. [Google Scholar] [CrossRef] [PubMed]
  14. Trujillo, C.A.; Adams, J.W.; Negraes, P.D.; Carromeu, C.; Tejwani, L.; Acab, A.; Tsuda, B.; Thomas, C.A.; Sodhi, N.; Fichter, K.M.; et al. Pharmacological reversal of synaptic and network pathology in human MECP2-knockout neurons and cortical organoids. EMBO Mol. Med. 2021, 13, e12523. [Google Scholar] [CrossRef]
  15. Kagan, B.J.; Kitchen, A.C.; Tran, N.T.; Parker, B.J.; Bhat, A.; Bye, J.O.; Liddicoat, A.M.; Hernández, D.M.; Crouch, S.H.; Kearney, C.J.; et al. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 2022, 110, 4044–4059.e11. [Google Scholar] [CrossRef]
  16. Smirnova, L.; Caffo, B.S.; Gracias, D.H.; Huang, Q.; Morales Pantoja, I.E.; Tang, B.; Zack, D.J.; Berlinicke, C.A.; Boyd, J.L.; Harris, T.D.; et al. Organoid intelligence (OI): The new frontier in biocomputing and intelligence-in-a-dish. Front. Sci. 2023, 1, 1017235. [Google Scholar] [CrossRef]
  17. Puppo, F.; Muotri, A.R. Network and microcircuitry development in human brain organoids. Biol. Psychiatry 2023, 93, 590–593. [Google Scholar] [CrossRef]
  18. Patton, M.H.; Thomas, K.T.; Bayazitov, I.T.; Newman, K.D.; Kurtz, N.B.; Robinson, C.G.; Ramirez, C.A.; Trevisan, A.J.; Bikoff, J.B.; Peters, S.T.; et al. Synaptic plasticity in human thalamocortical assembloids. Cell Rep. 2024, 43, 114503. [Google Scholar] [CrossRef]
  19. Gilbert, F.; Russo, I. Neurorights: The land of speculative ethics and alarming claims? AJOB Neurosci. 2024, 15, 113–115. [Google Scholar] [CrossRef]
  20. Niikawa, T.; Hayashi, Y.; Shepherd, J.; Sawai, T. Human brain organoids and consciousness. Neuroethics 2022, 15, 5. [Google Scholar] [CrossRef]
  21. Pereira, A.; Garcia, J.W., Jr.; Muotri, A. Neural stimulation of brain organoids with dynamic patterns: A sentiomics approach directed to regenerative neuromedicine. NeuroSci 2023, 4, 31–42. [Google Scholar] [CrossRef]
  22. Bassil, K. The end of ‘mini-brains’! Responsible communication of brain organoid research. Mol. Psychol. Brain Behav. Soc. 2023, 2, 13. [Google Scholar] [CrossRef]
  23. Herculano-Houzel, S. The human brain in numbers: A linearly scaled-up primate brain. Front. Hum. Neurosci. 2009, 3, 31. [Google Scholar] [CrossRef] [PubMed]
  24. Hyun, I.; Scharf-Deering, J.C.; Lunshof, J.E. Ethical issues related to brain organoid research. Brain Res. 2020, 1732, 146653. [Google Scholar] [CrossRef]
  25. Ludlow, K.; Smyth, S.J.; Falck-Zepeda, J.B. Risk appropriate, science-based innovation regulations are important. Trends Biotechnol. 2025, 43, 502–510. [Google Scholar] [CrossRef]
  26. Dixon, T.A.; Muotri, A.R. Advancing preclinical models of psychiatric disorders with human brain organoid cultures. Mol. Psychiatry 2023, 28, 83–95. [Google Scholar] [CrossRef]
  27. Balci, F.; Ben Hamed, S.; Boraud, T.; Bouret, S.; Brochier, T.; Brun, C.; Cohen, J.Y.; Coutureau, E.; Deffains, M.; Doyère, V.; et al. A response to claims of emergent intelligence and sentience in a dish. Neuron 2023, 111, 604–605. [Google Scholar] [CrossRef]
  28. Weintraub, K. “Mini Brains” Are Not Like the Real Thing. Scientific American. 30 January 2020. Available online: https://www.scientificamerican.com/article/mini-brains-are-not-like-the-real-thing/ (accessed on 28 February 2025).
  29. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450–461. [Google Scholar] [CrossRef]
  30. Lunden, J.W.; Durens, M.; Nestor, J.; Niescier, R.F.; Herold, K.; Brandenburg, C.; Lin, Y.-C.; Blatt, G.J.; Nestor, M.W. Development of a 3-D organoid system using human induced pluripotent stem cells to model idiopathic autism. Adv. Neurobiol. 2020, 25, 259–297. [Google Scholar] [CrossRef]
  31. Greely, H.T. Human brain surrogates research: The onrushing ethical dilemma. Am. J. Bioeth. 2021, 21, 34–45. [Google Scholar] [CrossRef]
  32. Lavazza, A.; Chinaia, A.A. Human brain organoids and their ethical issues: Navigating the moral and social challenges between hype and underestimation. EMBO Rep. 2024, 25, 13–16. [Google Scholar] [CrossRef] [PubMed]
  33. Acosta, N.D.; Golub, S.H. The new federalism: State policies regarding embryonic stem cell research. J. Law Med. Ethics 2016, 44, 419–436. [Google Scholar] [CrossRef]
  34. Ruiz, S.; Valera, L.; Ramos, P.; Sitaram, R. Neurorights in the Constitution: From neurotechnology to ethics and politics. Philos. Trans. R. Soc. B Biol. Sci. 2024, 379, 20230098. [Google Scholar] [CrossRef]
  35. Ryle, G. The Concept of Mind: 60th Anniversary Edition, 1st ed.; Routledge: London, UK, 2009. [Google Scholar] [CrossRef]
  36. Kumar, S. Fundamental limits to Moore’s law. arXiv 2015, arXiv:1511.05956. [Google Scholar]
  37. Scannell, J.W.; Blanckley, A.; Boldon, H.; Warrington, B. Diagnosing the decline in pharmaceutical R&D efficiency. Nat. Rev. Drug Discov. 2012, 11, 191–200. [Google Scholar] [CrossRef]
  38. Duval, M.X. The inadequacy of the reductionist approach in discovering new therapeutic agents against complex diseases. Exp. Biol. Med. 2018, 243, 1004–1013. [Google Scholar] [CrossRef]
  39. Date, P.; Kay, B.; Schuman, C.D.; Patton, R.; Potok, T. Computational complexity of neuromorphic algorithms. In Proceedings of the International Conference on Neuromorphic Systems, Knoxville, TN, USA, 27–29 July 2021; pp. 1–7. [Google Scholar] [CrossRef]
  40. Yao, Y.; Pankow, R.M.; Huang, W.; Wu, C.; Gao, L.; Cho, Y.; Chen, J.; Zhang, D.; Sharma, S.; Liu, X.; et al. An organic electrochemical neuron for a neuromorphic perception system. Proc. Natl. Acad. Sci. USA 2025, 122, e2414879122. [Google Scholar] [CrossRef]
  41. Te Meerman, S.; Freedman, J.E.; Batstra, L. ADHD and reification: Four ways a psychiatric construct is portrayed as a disease. Front. Psychiatry 2022, 13, 1055328. [Google Scholar] [CrossRef]
  42. Qian, X.; Song, H.; Ming, G. Brain organoids: Advances, applications and challenges. Development 2019, 146, dev166074. [Google Scholar] [CrossRef]
  43. Barabási, A.-L.; Gulbahce, N.; Loscalzo, J. Network medicine: A network-based approach to human disease. Nat. Rev. Genet. 2011, 12, 56–68. [Google Scholar] [CrossRef]
  44. Sun, D.; Gao, W.; Hu, H.; Zhou, S. Why 90% of clinical drug development fails and how to improve it? Acta Pharm. Sin. B 2022, 12, 3049–3062. [Google Scholar] [CrossRef]
  45. Racine, E.; Bar-Ilan, O.; Illes, J. fMRI in the public eye. Nat. Rev. Neurosci. 2005, 6, 159–164. [Google Scholar] [CrossRef] [PubMed]
  46. Popescu, M.; Thompson, R.B.; Gayton, W.; Markowski, V. A reexamination of the neurorealism effect: The role of context. J. Sci. Commun. 2016, 15, A01. [Google Scholar] [CrossRef]
  47. Weisberg, D.S.; Keil, F.C.; Goodstein, J.; Rawson, E.; Gray, J.R. The seductive allure of neuroscience explanations. J. Cogn. Neurosci. 2008, 20, 470–477. [Google Scholar] [CrossRef] [PubMed]
  48. Qiu, H.S.; Peng, H.; Fosse, H.B.; Woodruff, T.K.; Uzzi, B. Use of promotional language in grant applications and grant success. JAMA Netw. Open 2024, 7, e2448696. [Google Scholar] [CrossRef]
  49. Redford, K.H.; Padoch, C.; Sunderland, T. Fads, funding, and forgetting in three decades of conservation. Conserv. Biol. 2013, 27, 437–438. [Google Scholar] [CrossRef]
  50. Aral, S. The Hype Machine: How Social Media Disrupts our Elections, Our Economy, and Our Health—And How We Must Adapt; Crown Currency: Sydney, Australia, 2020. [Google Scholar]
  51. Simons, D.J.; Boot, W.R.; Charness, N.; Gathercole, S.E.; Chabris, C.F.; Hambrick, D.Z.; Stine-Morrow, E.A.L. Do “brain-training” programs work? Psychol. Sci. Public Interest 2016, 17, 103–186. [Google Scholar] [CrossRef]
  52. Heyes, C.; Catmur, C. What happened to mirror neurons? Perspect. Psychol. Sci. 2021, 16, 917–931. [Google Scholar] [CrossRef]
  53. Willcoxon, M. Overexposure Distorted the Science of Mirror Neurons. Quanta Magazine, 2 April 2024. Available online: https://www.quantamagazine.org/overexposure-distorted-the-science-of-mirror-neurons-20240402/ (accessed on 28 February 2025).
