Proceeding Paper

Intelligence and the Hard Problem of Consciousness—With ‘Dual-Aspect Theory’ Notes †

Independent Researcher, 5015 Erlinsbach SO, Switzerland
Presented at the 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 4; https://doi.org/10.3390/proceedings2025126004
Published: 10 September 2025

Abstract

To model informatic intelligence, agency, consciousness and the like, one must address a claimed Hard Problem: that a grasp of ‘the mind’ lies wholly beyond scientific views. While this claim is suspect, persistent analogues can be identified in the literature, such as a “symbol grounding problem”, “solving intelligence”, a missing “theory of meaning”, and more. The topic of subjective phenomena thus still holds sway in many corners as being unresolved. But firm analysis of Hard Problem claims is rare; researchers instead respond intuitively, claiming that (1) it is an absurd view unworthy of study, or (2) it is an intractable issue defying study, where neither side offers much clarifying detail. In contrast, this paper firmly assesses the Hard Problem’s claim contra one scientific model: evolution by means of natural selection (EvNS). It examines the specific logic behind this claim, as seen in the literature over the years. The paper ultimately shows that the Hard Problem’s logic is deeply flawed, with the further implication that EvNS remains available for exploring consciousness. The paper also suggests that an ‘information theory’ dual-aspect approach is best suited to resolving Hard-Problem-like claims.

1. Introduction: Statement of Problem and Proposed Approach

The philosopher David Chalmers [1] asserts that a Hard Problem exists in the study of consciousness: mastering issues of ‘the mind’ (informatic intelligence, agency, and the like) will always defy science. Here, the human mind and body are seen as functionally tied but existing apart, where the mind has no verified physical form while the body has a clear physicality. This division of a thinking world (mind) from a material world (body), for Chalmers and others, presents an insoluble point—an explanatory gap “not open to investigation by the usual scientific methods” where a “reductive explanation of consciousness is [therefore scientifically] impossible” ([2], p. xiv).
Related views date back to Anaxagoras’s pre-Socratic ‘mind (nous) and matter’, and to Descartes’s divided ‘mind and body’. Modern analogues convey this as a “symbol grounding problem” [3], “solving intelligence” [4], a missing “theory of meaning” [5], and more. Recent advances in artificial intelligence [6] and notions of coming Super-Intelligence [7], with its attendant challenges, mark a need to re-examine this issue of phenomenal subjectivity, which is often called the Hard Problem in consciousness studies.
To begin, Chalmers sees consciousness as “a scientific problem that requires philosophical methods of understanding before we can get off the ground”, but arrives at “conclusions that some people may think of as ‘antiscientific’: I argue that reductive explanation of consciousness is impossible” ([1], p. xiv). To support this, he asserts a type of conceptual “logical supervenience” over scientific ‘empiric/natural supervenience’ ([1], pp. xiv, 34–35). (Supervene: (of a fact or property) entailed by or reliant on the existence of another, e.g., mental events supervene on physical events. Chalmers pushes this definition beyond ‘fact or property’, adding conceptual (he calls ‘logical’) supervenience, and gives primacy to conceptual logic (fiction) over empiric logic (practice)).
This conceptual split suggests that empiricism (practical experience) is the weaker of the two views. Chalmers makes this claim against all of ‘scientific method’, where science is admittedly unfinished, with at-times puzzling ‘gaps’. Also, mixed partial scientific vistas mean that the only way to assess the Hard Problem’s merit is to gauge its claims against one specific scientific model. As such, this essay uses Darwin’s theory of evolution by means of natural selection, humanity’s most successful scientific model (EvNS), as a touchstone. If the Hard Problem’s claim contra EvNS can somehow be verified, this would be notable.
Nothing in science as a whole has been more firmly established by interwoven factual information. Nothing has been more illuminating than the universal occurrence of biological evolution. Furthermore, few natural processes have been more convincingly explained than evolution by the theory of natural selection [8].
Edward O. Wilson

2. Literature Review: Chalmers’s Case Against Darwinism

Despite EvNS’s status, Chalmers dismisses EvNS as inapt for consciousness studies where the “process of natural selection cannot distinguish between me and my zombie twin. [EvNS] selects properties according to their functional role, and my zombie twin performs all functions I perform just as well” ([1], pp. 120–121). A zombie is defined earlier as “physically identical to me…molecule for molecule…but lacking conscious experiences…all is dark inside…empirically impossible” ([1], pp. 94, 96). This marks a ‘fictional zombie twin versus empiric Chalmers’ thought experiment used throughout his book—presumably tied to the above “logical supervenience” over ‘natural supervenience’. Chalmers’s targeted negation of EvNS, alongside EvNS’s success, makes it ideal for comparative analysis. One presumes that his “empirically impossible” zombie logic will be shown, by means of conceptual “logical supervenience”, to surpass empirical EvNS over the course of the book.
Chalmers asserts fictional–functional zombies as an anti-scientific claim in his only note on EvNS in the book, with no further detail offered. He names no other scientific models for comparative study—he lays out this ‘zombie trope’ and proceeds from there. Chalmers is not alone in using zombie-like devices to portray problems of consciousness [9,10], but his use of fictional zombies to rebut EvNS is singular in the literature. Lastly, this “empirically impossible” zombie rebuttal oddly collides with prior claims to not “dispute current scientific theories in domains where they have authority” and to “take consciousness to be a natural phenomenon…under the sway of natural laws” ([1], p. xiii).
Chalmers’s sparse EvNS remarks prompt a closer study of his book for more insight. With the absence of other EvNS notes, perhaps studying the functionalism he poses against EvNS will help. Functionality arises in Chapter 1.2, prior to the introduction of zombies. Here, he notes that phenomenal events are tied to “subjective conscious experience” with psychological views of phenomenal events, joined as “functionalism and cognitive science”. He then asks “whether the phenomenal and psychological will turn out to be the same [practical] thing” ([1], p. 12). For example, in considering joint phenomenal and psychological roles, he names many likely synchronous/joint functions (e.g., pain, emotions, etc.) that are key to survival.
Chalmers states that the “co-occurrence of phenomenal and psychological properties reflects something deep about our phenomenal concepts” of consciousness, where it is useful “to say that together, the psychological and the phenomenal exhaust the mental…there is no third kind of manifest explanandum” ([1], p. 17). He sees phenomenal–psychological ties ([1], pp. 13, 16–17, 21–22) that also evoke EvNS: selected causal functions via phenomenal input with selectively useful psychological (executive, organizing) functions needed for survival.
However, he then insists that the phenomenal and psychological be held apart, since this divide is “logically possible” (despite prior notes), and thus must apply equally to consciousness. This imagined logically-possible split is what, in fact, enacts a functional Hard Problem, and presumably supports his “logical supervenience” claim. But this phenomenal–psychological functional split supervenes wholly on his personal opinion in Chapter 1.3; he offers no support for this view other than that it is “logically possible”. As such, one either accepts or rejects this “logically possible” split as a core matter, beyond all other logic and beyond the empiric sensoria common to survival.
Chalmers continues, “functional analysis works beautifully in most areas of cognitive science”, where many “mental concepts can be analyzed functionally” ([1], p. 46). But since “functionalist analysis denies the distinct” phenomenal/psychological split he requires, it is “utterly unsatisfactory” ([1], p. 24) for studying consciousness. Further, “no matter what functional account of cognition one gives, it [still] seems logically possible” to imagine a scenario that requires no conscious presence (emphasis added, ([1], p. 47)). Later, Chalmers rejects all functional views, for to “analyze consciousness in terms of some functional notion is either to change the subject or to define away the problem” ([1], p. 105). But he never explains how ‘function-free consciousness’ might arise or even inspire his own study of consciousness: why study something with no functional effect—what is the point? His disavowal of functionality also ignores his own call for a fully functional zombie twin, which is key to many of his arguments. This ‘logically possible’ phenomenal–psychological functional split thus wholly underlies Chalmers’s anti-empiric/EvNS claim, rather than the closely reasoned chain of logically supervening roles he originally invokes.
Later essays keep to this anti-EvNS view. In Strong and Weak Emergence, Chalmers [11] argues that the emergence of high-level truths like consciousness is not conceptually or metaphysically reliant on low-level truths like survival (i.e., consciousness is “strongly emergent”, not tied to survival). Conversely, he states that the arrival of EvNS and genomics is unsurprising (“weakly emergent”, empirically supervening on survival). But then he again seems to contradict himself, stating that a “most salient adaptive phenomena like intelligence” is also quite natural ([11], p. 253). How a functionally adaptive intellect differs from his implied super-natural consciousness is never explained. In fact, he never details intelligence or consciousness in a way that allows analysis. He instead points to his zombie trope, “I have argued this position at length elsewhere” ([11], pp. 246–247); thus, Strong and Weak Emergence wholly relies on the prior fictional zombie claim in his 1996 book.
Furthermore, Chalmers ([12], p. 103) begins Consciousness and Its Place in Nature with the following passage: “Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature” (emphasis added). Chalmers again invokes a conceptual split of ‘consciousness contra natural order’, with no new detail or support offered, and again ignoring nature’s stipulated EvNS functioning. This wholly conceptual split again contradicts his earliest 1996 notes on pragmatic phenomenal roles (subjective conscious experience) tied to psychology (functionalism and cognitive science). In later talks, Chalmers maintains this anti-EvNS view, where “zombies could have existed, evolution could have produced zombies” [13], again ignoring EvNS’s functional stipulations in practical survival.
Finally, Chalmers’s distrust of the current ‘science of consciousness’, also shared by others [14], drives a call for “psychophysical laws” as a regular feature of his work [1,11,12]. His argued-for laws are presented as a vital alternative to the natural order. But those laws are never developed; they are only noted as needed to solve persistent questions on consciousness. The neuroscientist Christof Koch ([14], p. 124) sees Chalmers’s psychophysical laws as a “crude type of dual-aspect information theory”, set out (per Koch) in an appendix to Chalmers’s 1996 volume, but not actually found in any generally available edition of the book. Prior to 1996, Chalmers [15] did pose a few notes on the nature of information and problems in consciousness studies. The most detail he later offers on “psychophysical laws” appears in Chapter 8 of his 1996 volume, though again more as a general exploration of ‘information’ than as a framing of any actual new psychophysical laws.
There is something extremely puzzling about the claim that consciousness plays no evolutionary role, because it is obvious that consciousness plays a large number of such roles. ([16], p. 63)
John Searle

3. Discussion of the Literature: Chalmers’s Framing

How do we relate the above to consciousness and EvNS? Chalmers plainly sees them as apart, where his only evidence is a zombie twin, a fictional–functional device from his only notes on EvNS. Thus, if we are to accept that EvNS is inapt for studying consciousness, we must accept a logical superiority (or logical possibility) of fictional zombies over scientific theory. This leads one to ask: how well-supported is that logical fiction? Chalmers holds to a line of thought that is, by his own words, “not defined in terms of deducibility in any system of formal logic” ([1], p. 35); an odd self-disavowal we can hardly ignore. His fictional–functional zombie claim is also an odd gambit for one wanting to abide by accepted science and natural laws, as noted earlier. Lastly, his zombies present a catalog of other issues.
Apropos Chalmers’s zombie twin, Frigg and Hartmann ([17], p. 11) note that “The drawback…is fictional entities are notoriously beset with ontological riddles. This has led many philosophers to argue that [such devices]…must be renounced.” More problems with zombie-like devices are presented elsewhere in the literature [9]. For example, neuro-anthropologist Terrence Deacon ([10], pp. 40, 45) states that the use of “homuncular representations”, akin to zombies, “can be an invitation for misleading shortcuts of explanation…although [the ‘map’] is similar in structure to what it represents it is not intrinsically meaningful…the correspondence…tells us nothing about how it is interpreted…Appealing to an agency that is just beyond detection to explain why something [works] is an intellectual dead end in science. It locates cause or responsibility without analyzing it.”
Still, Chalmers’s argument is a thought experiment, a long-used intellectual tool requiring fictional devices. The problem is that his zombie trope is imprecisely drawn. Chalmers defines his zombie as “lacking conscious experiences altogether…all is dark inside,” while claiming that this “zombie twin performs all functions I perform just as well.” Does Chalmers envision this zombie in a role writing a piece on consciousness ‘just as well as he’? And with what for content? It is empty inside! Such complex cognitive functioning cannot be explained without ‘content’ as accrued life experience or working memory. Chalmers’s claim of an “empty” but fully functional zombie as “logically possible” is highly unlikely.
More imprecision is evident when Chalmers ([1], p. 35) poses a zombie twin, asking what “would have been in God’s power (hypothetically!) to create, had he chosen to do so. God could not have created…male vixens” due to an innate gender conflict. But Chalmers’s God is unfazed by even-more-contrary ‘living death’ in zombies, and also ignores humanity’s gender-bending habits (e.g., hermaphrodites, transsexuals, transvestites, bisexuality, hijras, LGBT, etc.). Chalmers’s claim of God-like logical possibilities is thus deeply flawed, especially “in terms of its truth across all logically possible worlds” as a “notion of conceptual truth” to support “[k]ey elements of [his] discussion” ([1], pp. 52–54).
Chalmers ([1], pp. 94–97) later elevates zombies to something “quite unlike the zombies [of] Hollywood movies”, with “significant functional impairments.” A “creature [that] is molecule for molecule identical…. He will certainly be identical to me functionally” and “psychologically identical” too, but lacking conscious experience—a phenomenal zombie. Here, “the logical possibility [of such] zombies seems equally obvious” as that of a mile-high unicycle. But this zombie–unicycle comparison ignores the functioning that makes a “unicycle” a unicycle: a one-wheeled vehicle acrobatically ridden by a human. Chalmers’s mile-high unicycle is logically and practically impossible, as the necessarily massive device could not even be lifted or ridden by any ordinary human. (Such a unicycle would require CroMoly 1.5" tubing for lightweight frame building, with a wall 0.035" or 0.25" thick, weighing 2878 or 17,541 pounds [18] for a one mile length. Several mile-long tubes would be needed. Also, this omits the necessarily large wheel and axle needed to support that massive device, and a necessarily enormous drive chain).
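The tube-weight figures in the parenthetical above can be checked with a short calculation. This is a minimal sketch, assuming a density of roughly 0.282 lb/in³ for 4130 chromoly steel (the value that reproduces the cited figures [18]) and the standard annular cross-section formula; the function name is illustrative.

```python
import math

def tube_weight_lb(od_in, wall_in, length_in, density=0.282):
    """Weight of a round steel tube in pounds.

    density ~0.282 lb/in^3 is an assumed value for 4130 chromoly.
    """
    id_in = od_in - 2 * wall_in                     # inner diameter
    area = math.pi / 4 * (od_in**2 - id_in**2)      # annular cross-section, in^2
    return area * length_in * density

MILE_IN = 5280 * 12  # one mile in inches

print(round(tube_weight_lb(1.5, 0.035, MILE_IN)))  # ~2878 lb (0.035" wall)
print(round(tube_weight_lb(1.5, 0.25, MILE_IN)))   # ~17,541 lb (0.25" wall)
```

The calculation confirms the quoted 2878- and 17,541-pound figures for a single mile-long tube, underscoring the practical impossibility of the mile-high device.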
Chalmers sees the risk in mixing conceptual (zombie) and empiric (unicycle) views, citing W. V. Quine “who argued that there is no useful distinction between conceptual truths and empirical truths” ([1], p. 52). But he repeatedly ‘solves’ such conflicts by mixing fictional logical possibilities with nature’s functional facts. By one count, he holds psychological and empiric phenomenal roles as innately tied, but then insists they be held apart for unclear reasons, and then returns to tie his ‘phenomenal zombie’ to a presumably empiric unicycle, which actually presents yet another impossible fictional device. This is how Chalmers ultimately supports his functional zombie twin as “logically possible”.
Beyond imprecision, in the whole of his work, Chalmers never defines consciousness. The closest he comes is this: “What is central to consciousness, at least in the most interesting sense, is experience. But this is not a definition. At best, it is a clarification…to define conscious experience in terms of more primitive notions is fruitless” ([1], pp. 3–4). So, when he deprives his zombie twin of consciousness, what exactly is it denied? If this zombie has no functioning sensorium for aforementioned ‘experience’, how does it exist, see, smell, eat, walk, or talk? What furnishes base experiential inputs or ‘feedback’, how is functioning realized but simultaneously absent? Again, as Frigg and Hartmann suggest, even basic self-regulation cannot be explained. Chalmers’s view of consciousness starts to seem more ‘fully fictional’ than it is ‘fully functional’.
As Chalmers never firmly defines consciousness, intelligence, or zombies, clarifying this debate is hopeless. He can make whatever ‘zombie claim’ he wishes with impunity, since it is an argument impossible to refute, having no firm bounds or logic—it is an untestable and unfalsifiable proposition [19]. Later, Chalmers even uses the phrase “has the consciousness of a zombie” ([1], p. 254). So, a clear sense of what a zombie is, conscious or not, is plainly absent. Further, the lack of even a working definition for consciousness deprives us of the bare taxonomy needed for useful study, with taxonomic naming being key to any formal endeavor [20]. Still, some believe that the precise naming of consciousness is unlikely [14,21]. But if no formal name is possible, there is little point in studying ‘that which we cannot even name’. Ascribing unnamable or supernatural traits to consciousness is oddly reminiscent of a much earlier mystical, pre-scientific era. Moreover, Chalmers’s repeatedly contradictory, mixed, and vague assertions sharply hinder honest appraisal of his position.
Chalmers ([1], p. xiv) introduces his thesis by stating that “one technical concept that is crucial to [my argument] is that of supervenience”. But close reading shows that a contrived logically possible phenomenal–psychological split is his one key concept. Chalmers sees the risk in this conceptual claim, noting that the “way in which conceivability arguments go wrong is by subtle conceptual confusion: if we are insufficiently reflective we can overlook an incoherence in a purported possibility…[revealing] that the concepts are being incorrectly applied” ([1], pp. 98–99). But he ignores his own imprecision in using zombies to oppose EvNS. Instead, he makes statements that at first sound thoughtful, but ultimately prove to be a smoke-screen for a further implausible logically possible fictional function. Chalmers’s weak logically possible argument makes any wider claims arbitrary. Moreover, myriad logically possible fictional counter-claims can be made to defeat his arguments: ‘Chalmers’s zombie could never function as said, since it would be constantly harried by flying monkeys (obviously!)’ While perhaps entertaining, heavily fictional logical accounts offer no basis for understanding difficult real-world/empiric topics like consciousness.
Chalmers ([1], p. 110) sees that there “is certainly a sense in which all these arguments are based on intuition, but [he tries] to make clear just how natural and plain these intuitions are, and how forced it is to deny them”. But what is forced is his conceptual split in logical and empiric realms, where ‘logic’ actually supervenes on verifiable empiricism—and if not empiricism, upon what does Chalmers’s logic supervene: mere fantasy? Chalmers must answer this question if he is to offer more than mixed notions of what is logical and what is possible. But as he is vague on these points, his original claim of logical supervenience is a false implied supervenience thesis or FIST [22].
Lastly, Chalmers never offers an alternative to EvNS. This complete lack of a practical framing for consciousness makes his Hard Problem hard. He denies us EvNS as a likely tool (causal functions) but offers no substitute beyond his empty “psychophysical laws”. Yes, this is indeed a hard problem. His “empirically impossible” zombie logic abruptly plants us within a fictional terra incognita, an informationally spare state defined only by the author’s allowances. It’s unclear how even Chalmers can advance his own psychophysical laws in a serially coherent manner, as he never offers any actual detail.
Chalmers’s view is perhaps the best known of several similar claims that merit equal critical review. For example, philosopher Daniel Dennett [23] often argued the opposite of Chalmers: that consciousness is non-existent, an ultimate illusion. But Dennett’s ‘consciousness is pure fiction’ view fails for the same reason that Chalmers’s view fails: it ignores EvNS as a long-standing central organizing principle for all life.
Still, interestingly, Chalmers, Dennett [24], and others [25] all suggest an information-theory-type approach for grasping consciousness. Giulio Tononi’s [26] Integrated Information Theory is perhaps the best known of these. These ‘information theory’ views often invoke Claude Shannon’s [27] signal entropy as a likely basis, but gloss over major issues, such as Shannon and Weaver’s [5] missing “theory of meaning” with “disappointing and bizarre” facets. Shannon [28] also saw his work as being “…ballooned to an importance beyond its actual accomplishments…with an element of danger.” The ensuing quest for a general ‘theory of meaning’ has a long history [29], persisting to today [30,31]. In any case, the need for a general ‘theory of meaning’ (a basis for all subjective phenomena) better constitutes a ‘hard problem’ than does Chalmers’s “logically possible” zombie-world.
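The gap between Shannon’s signal entropy and any ‘theory of meaning’ can be shown in a minimal sketch: entropy quantifies only symbol statistics, so two messages with opposite meanings but the same letters score identically. The function below is illustrative, not drawn from any cited source.

```python
import math
from collections import Counter

def shannon_entropy(msg):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Same letters, opposite meanings: entropy cannot tell them apart.
a = "dog bites man"
b = "man bites dog"
print(shannon_entropy(a) == shannon_entropy(b))  # identical entropy values
```

That a measure blind to ‘dog bites man’ versus ‘man bites dog’ is repeatedly invoked as a basis for consciousness illustrates why the missing “theory of meaning” remains the harder problem.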

4. Discussion, Conclusions, and Next Steps

Chalmers rightly diagnoses a difficulty in consciousness studies, where present ‘science’ adds little understanding. But his zombie argument is unhelpful and fails for four reasons: (1) his zombie twin wholly lacks actual supervenient support, (2) no absolute scientific Theory of Everything exists against which to test his general anti-scientific claim, (3) science holds fragmented serial discoveries and innovations, where reinvention is a core tenet, and lastly, (4) diverse functional roles must be considered to verify his claim, but he only offers a fictional zombie analysis, with no actual functional support (despite claiming otherwise).
As an alternative ‘non-zombie thought experiment’, imagine a factory making computer hard drives (HDDs). Each HDD leaving that factory is “molecule for molecule identical…will certainly be identical…functionally…and psychologically identical”. Next, imagine one HDD is installed in a laptop you buy, complete with a ‘phenomenal–psychological’ functioning operating system and applications. That HDD is now subtly different from all others but still follows Chalmers’s “…all is dark inside” and “molecule for molecule identical” (i.e., no ‘experience’). Later, after years of use, you see that your laptop’s HDD holds a lot of very personal data (‘experience’).
Finally, if we return to that ‘HDD-zombie factory’, we can now ask when, if, why, and how your laptop becomes a ‘zombie twin’, apart from other laptops. When, “at least in the most interesting sense, is [your] experience” seen to arise in the laptop? As modern humans, we grasp that the essential matter lies in the HDD’s encoded data, with two aspects: (1) organized/logical bits as encoded data (HDD metadata, binary 1/0, etc.) as ‘bare information’, and (2) electromagnetic charged bits on the HDD’s platter that convey encoded data. Together they suggest an informatic ‘dual-material-aspect’ we might loosely tie to an immaterial–material (mind–body) duality, and equally tie to HDD ‘experience’.
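The HDD thought experiment can be sketched in a few lines. The drive contents here are hypothetical placeholders, meant only to show how ‘factory identical’ devices diverge solely in encoded state (the logical aspect), while the storage medium (the material aspect) is unchanged in kind.

```python
# Two 'factory identical' drives, modeled as byte arrays (hypothetical data).
factory_image = bytes(16)           # fresh drive: all zeros, 'dark inside'
drive_a = bytearray(factory_image)  # stays in the warehouse
drive_b = bytearray(factory_image)  # installed in your laptop

# At first the drives are interchangeable: same bytes, same 'molecules'.
assert drive_a == drive_b

# Years of use encode 'experience' onto drive_b alone.
drive_b[0:5] = b"diary"

# The drives now differ only in encoded state, not in physical kind.
assert drive_a != drive_b
```

The point of the sketch is that nothing about the medium changes when ‘experience’ accrues; only the organization of its encoded state does.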
Of the above dual aspects, the first invokes informatic order/logical signs (versus signal ‘noise’), and the second points to encoded data held via ‘material properties’—where all matter has energetic facets, specifically electromagnetic in this case. Here, starting with the Standard Model of particle physics, many types of ‘unseen’ subtle energetic aspects are mapped, with bosons (photons, gluons, etc.) as force carriers, mediating all fermion (matter) interactions. From here onward, all of nature’s “order for free” [32] has a dual-aspect as embedded matter-and-force, where ‘force’ may appear to be ‘immaterial’ (invisible), but which is actually tied to all of material reality.
Beyond an HDD’s deeply dualist ‘informatic logic’ and ‘energetic matter’, other researchers target a neurologic structure for ‘brain as mind’. But if the brain’s neurons hold encoded data (akin to an HDD), we see some 100,000 neurons per cubic millimeter of brain, with many neuron types, all of which we poorly grasp. When or if we will ever find the tools needed to decipher this minutely entangled/encoded neuronal mass is unknown, but the same is not true for HDDs. As such, some people say that “engineers will ultimately solve the problem of consciousness” [33].
All of the above points to a need to clarify ‘informatic intelligence’, akin to consciousness—a need that grows with every advance in artificial intelligence and the presumed coming super-intelligence [34]. Thus, the next step needed here is further study on the nature of information and a “theory of meaning”. Still, this paper’s contribution is twofold: (1) it leaves EvNS available as a scientific basis, and (2) it more clearly points to “information theory” as a likely path for framing scientific solutions. Ensuing work on detailing needed informatic tools, and the like, is pursued in my other essays and talks [35]. See related videos at https://www.youtube.com/@MarcusAbundis (accessed on 8 September 2025).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Chalmers, D.J. The Conscious Mind: In Search of a Fundamental Theory; Oxford University Press: Oxford, UK, 1996. [Google Scholar]
  2. Chalmers, D.J. Moving forward on the problem of consciousness. J. Conscious. Stud. 1997, 4, 3–46. [Google Scholar]
  3. Harnad, S. The symbol grounding problem. Physica D 1990, 42, 335–346. [Google Scholar] [CrossRef]
  4. Burton-Hill, C. The superhero of artificial intelligence: Can this genius keep it in check? The Guardian 2016. Available online: https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago (accessed on 31 January 2016).
  5. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, USA, 1949. [Google Scholar]
  6. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  7. Yampolskiy, R.V. Artificial Superintelligence; Chapman and Hall/CRC: New York, NY, USA, 2015. [Google Scholar] [CrossRef]
  8. Wilson, E.O. The four great books of Darwin. In Proceedings of the Sackler Colloquium: In the Light of Evolution IV; National Academy of Sciences: Washington, DC, USA, 2009; Available online: http://sackler.nasmediaonline.org/2009/evo_iv/eo_wilson/eo_wilson.html (accessed on 31 January 2014).
  9. Kirk, R. Zombies. In The Stanford Encyclopedia of Philosophy, Spring 2012 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2012. [Google Scholar]
  10. Deacon, T.W. Incomplete Nature: How Mind Emerged from Matter; W.W. Norton & Co.: New York, NY, USA, 2013. [Google Scholar]
  11. Chalmers, D.J. Strong and weak emergence. In The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion; Clayton, P., Davies, P., Eds.; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  12. Chalmers, D.J. The Character of Consciousness; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  13. Harris, S. 34-The Light of the Mind: A Conversation with David Chalmers. 2016. Available online: https://samharris.org/podcasts/the-light-of-the-mind/ (accessed on 31 October 2018).
  14. Koch, C. Consciousness: Confessions of a Romantic Reductionist; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  15. Chalmers, D.J. Facing Up to the Problem of Consciousness. J. Conscious. Stud. 1995, 2, 200–219. [Google Scholar]
  16. Searle, J.R. Mind, Language, and Society: Philosophy in the Real World; Basic Books: New York, NY, USA, 1998. [Google Scholar]
  17. Frigg, R.; Hartmann, S. Fiction and scientific representation. In The Stanford Encyclopedia of Philosophy, Spring 2006 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2006; Available online: http://plato.stanford.edu/entries/models-science/ (accessed on 31 January 2014).
  18. Online Metals. 4130 Alloy Tube Round. 2012. Available online: http://www.onlinemetals.com/merchant.cfm?id=250&step=2 (accessed on 3 July 2014).
  19. Shuttleworth, M. Falsifiability. 2008. Available online: http://explorable.com/falsifiability (accessed on 2 February 2014).
  20. Ford, D. The Search for Meaning: A Short History; University of California Press: Berkeley, CA, USA, 2007. [Google Scholar]
  21. Greenfield, S. The neuroscience of consciousness. In Proceedings of the Towards a Science of Consciousness Conference, Hong Kong, China, 10 June 2009; Available online: http://thesciencenetwork.org/programs/raw-science/the-neuroscientific-basis-of-consciousness (accessed on 31 January 2014).
  22. McLaughlin, B.; Bennett, K. Supervenience. In The Stanford Encyclopedia of Philosophy, Winter 2011 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2012. [Google Scholar]
  23. Dennett, D.C. Consciousness Explained; Little, Brown and Co.: Boston, MA, USA, 1991. [Google Scholar]
  24. Dennett, D.C. A Difference That Makes a Difference: A Conversation with Daniel C. Dennett. The Edge 2017. Available online: https://www.edge.org/conversation/daniel_c_dennett-a-difference-that-makes-a-difference (accessed on 9 September 2025).
  25. Fallon, J. Integrated Information Theory of Consciousness. The Internet Encyclopedia of Philosophy (ISSN 2161-0002). Available online: https://iep.utm.edu/integrated-information-theory-of-consciousness/ (accessed on 9 September 2025).
  26. Tononi, G. Phi: A Voyage from the Brain to the Soul; Pantheon Books: New York, NY, USA, 2012. [Google Scholar]
  27. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  28. Shannon, C.E. The Bandwagon. IRE Trans.-Inf. Theory 1956, 2, 3. [Google Scholar] [CrossRef]
  29. Carnap, R.; Bar-Hillel, Y. An Outline of a Theory of Semantic Information; Technical Report No. 247; Research Laboratory of Electronics, MIT: Cambridge, MA, USA, 1952. [Google Scholar]
  30. Floridi, L. Semantic conceptions of information. In The Stanford Encyclopedia of Philosophy, Spring 2017 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2017; Available online: https://plato.stanford.edu/archives/spr2017/entries/information-semantic/ (accessed on 29 January 2018).
  31. Lundgren, B. Does semantic information need to be truthful? Synthese 2019, 196, 2885–2906. [Google Scholar] [CrossRef]
  32. Kauffman, S.A. Investigations; Oxford University Press: New York, NY, USA, 2003. [Google Scholar]
  33. Behar-Montefiore, A.A. (University of Arizona, Tucson, AZ, USA). Personal exchange at The Science of Consciousness conference, Tucson, Arizona. 30 April 2016. [Google Scholar]
  34. Kurzweil, R. The Singularity Is Near; Viking: New York, NY, USA, 2005. [Google Scholar]
  35. Abundis, M. An A Priori Theory of Meaning. In Understanding Information and Its Role as a Tool; World Scientific: Singapore, 2025; Chapter 1; pp. 1–18. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
