1. Introduction
I have often accompanied artists as they browse the shelves in their favourite art supply store. Not because I am an artist myself, but because I know a few very well. Conjuring up as much patience as I can, I see nothing but pots of paints, rows of pencils, some pastels, and boredom quickly sets in. The artists, on the other hand, love it and spend an inordinate amount of time reflecting on the qualities and properties of this or that paint, pencil, or pastel (they also have a lot to say about paper weight and sketchbooks), and the things that they could do with one but not the other. They tell stories of aborted projects or more successful ones; the stories invariably cast materials as key players in the unfolding of these projects. These artists deeply understand the hybridity of art, that its production involves a cast of many. The process of art making is participatory in that all the elements involved in the enactment of an artwork participate in its instauration. That the artist plays a central role as a human actor goes without saying, but the artist understands that other members, non-human objects and substances, play essential roles, too. Art making is a co-production.
As a non-expert, I, along with many others, can note the difference between, say, an oil painting and a watercolour one, or between a sketch with pastels and one drawn with pencils. The finished artworks showcase the unique properties of the media in which they were realized. These different materials not only produce different effects, they behave differently. The tools and techniques cued and primed by the different media are different, too. The way in which you apply oil paint is not the same as working with watercolour. The affordances enacted by the coupling of artist with media have a unique and dynamic character. As she layers colour onto her watercolour painting, the artist might be heard saying, ‘you’ve got to let the paint do what it wants to do’, as she waits for a layer to dry to observe the result; this outcome then acts as guidance for the next layer. Of course, paint has no beliefs or desires, no intentionality or agency. Yet, it is an actant in semiotic parlance: an actant is anything that alters the course or unfolding of an event, “whatever acts or shifts actions” [
1]. Thus, a hammer is an actant in the task of hitting a nail, since doing so without one would transform the manner with which the task is carried out.
We stand back from the finished artwork; we see a blooming rose emerging from dense foliage (oil paint) or sailboats anchored at sunset (watercolour). Clearly, the artist’s work was inspired and guided by an idea or an image, mental or otherwise, in producing the readily interpretable objects in the depicted scene. We naturally and understandably slip into a world of ideas and mental imagery, empowering these cognitive phenomena with causal properties that determine and control the action of the artist. The agency of the materials that she employed is effaced, or perhaps we never even considered their agency in our naïve appreciation of art making. The artist in her own retrospective account may also narrate the work from a plane of ideas and mental imagery (at least to a naïve onlooker, but perhaps not so much when discussing her work with other artists). The gallerist will too, through his interactions with visitors and possible buyers, selling a story of ideas and inner visions along with the artwork (imbuing the work with meaning beyond the material depiction of the object). Yet, a granular capture of the artwork in production, as it takes form in the messy atelier, reveals the countless micro decisions that substantially affected the work, micro decisions that are either loosely linked to, or entirely omitted from, the retrospective retelling of the artwork ([
3], Chapter 6, provides a detailed account of the contingent instauration of a work of art that cannot be explained by resorting to the artist’s ‘idea’ at the start of the process). These micro decisions were triggered by the material employed, as well as by the state of the painting in its current phase of instauration. This ethnography of artwork-in-the-making reveals the crucial importance of nonhuman actants in the unfolding of the artwork. Art made versus art-in-the-making cues very different accounts of the process of making art.
2. The Diffusion Model of Ideas and the Quest for Exceptionalism
At a conference on creativity in Paris (October 2024), I listened to a fascinating presentation on the OECD’s Programme for International Student Assessment (PISA), which recently sought to measure creative thinking in adolescents across the world. A standardised test was developed to serve this herculean undertaking (690,000 15-year-olds in 81 countries). The tests are designed to measure divergent thinking (generate ideas) along with some evaluative executive control (evaluate and improve ideas). For example, students are prompted to provide three titles for an image or for a book cover, or to think of ways of improving accessibility for disabled users of a library. Evaluative test items invite students to improve on the design of a poster or evaluate and improve carpooling suggestions. The results are quite interesting and are detailed in a 300-page report (OECD [
4]). Here is how PISA defined creative thinking in this report (p. 47): “the competence to engage productively in the generation, evaluation and improvement of ideas that can result in original and effective solutions, advances in knowledge and impactful expressions of imagination”. There is an interesting jump in this definition from a cognitive ability (the generation and evaluation of ideas) to concrete translations of such ideas into ‘effective solutions’ and ‘impactful expressions’. This transition reflects a diffusion model of ideas [
5]: an idea once formulated is diffused with little or no transformation as it is materially reified and as it is ‘understood’ and used by an audience of interested interlocutors. A diffusion model of ideas reinforces the importance of studying abilities to generate ideas, since ideas can causally impact the world (from this diffusion perspective).
There is much to unpack here, starting with an old dualist chestnut, namely, the chasm between the mental and the physical. Rather than a diffusion model—where ideas are the hot knives that cut through buttery reality—a translation model proposes that an initial hunch is translated into a material object, be it an architectural drawing, an engineering blueprint, an essay draft, a sculptor’s maquette, a foam mock-up, or a software demo. This object is then interrogated, and the way it looks or behaves seeds new questions and exposes new uncertainties, which then guide new actions (and changes to the object) and a reformulation of the idea. As we proceed along these iterative cycles of ideation and material translation, the hunch that initiated this recursive process no longer acts as the cause that threads the subsequent iterations, since it has been translated and transformed. New knowledge is now materially embodied, acting as a boundary object of sorts [
6] (see also [
7]) that serves as the platform from which to launch the next iteration in this translation cycle. The ‘effective solutions’ and ‘impactful expressions’ are not explained by creative ideation at the start of this chain but are rather the result of these iterative translations. The explicative resources of a cognitivist account are insufficient, since a focus on the creator’s internal cognitive processes fails to take into consideration how objects and their transformation play a role in shaping these cognitive processes. The role of objects in these cyclical translations is critical, and the scholarship on creativity that ignores it, focusing only on ‘creative thinking’, will largely fail to explain how innovation and creative problem-solving take place (cf. Green et al. [
8], whose ‘process definition’ of creativity is still framed in terms of internal/mental mechanisms). I will return to this point since it represents the keystone of a systemic account of creative problem-solving.
PISA’s creative thinking test is also based on a distinction between big and little creativity (Big-C and little-c [
9]; note here the switch from cognition to creativity, a transition that is not so innocent). Little creativity is everyday creativity (e.g., “find solutions to day-to-day problems”; OECD, p. 47), while big creativity is “associated with intellectual or technological breakthroughs or literary masterpieces that require deep expertise in a given context” (OECD, p. 47). Big-C is steeped in exceptionalism: those displaying Big-C are endowed with cognitive superpowers that few of us possess. Little-c, on the other hand, is more pedestrian and is amenable to standardised testing, tests that essentially measure divergent and convergent thinking skills, as envisaged by Guilford [
10] and Mednick [
11], respectively. Such tests, and the ensuing rank ordering of performance that they generate (e.g., OECD, Table III.1 “Snapshots of performance in creative thinking” across the world, p. 26, Singapore first, Canada third, Denmark eighth, etc.), are also motivated by a quest for exceptionalism: students in some countries score above the OECD average, others inevitably below. It is the logic of measurement that reifies this quest. My critical reflections here are not on the value of such tests or the psychometric method that is deployed (well, certainly not in terms of its reliability), but on how the nature of the results and rank ordering place some at the top and others at the bottom: questions are then inevitably formulated by educators and policymakers to understand how to climb up this ladder. Since the tests do not involve much, if any, interactivity (that is, they do not involve translating ideas into objects and hence cannot trace the iterative translation process, outlined above, that is the engine of innovation), they simply cannot tell us anything about creativity, big or little. What is left is the quest to improve creative cognition test performance, a quest for exceptional cognitive abilities. That there are exceptional individuals (exceptional artists, writers, athletes) is not rejected here. The playmaking and positioning of an exceptional rugby player during a match cannot be explained if the behaviour of the ball is not factored into the equation. One may then argue that iterative prototyping from exceptional individuals yields more exceptional products, but the account of creativity is still couched in terms of the contingent nature of the discovery process that is mapped out through iterative prototyping (and not in terms of a priori exceptionalism).
This Big-C and little-c distinction obscures, rather than clarifies, the process through which creativity manifests. That there are differences, gigantic ones at that, between scientific breakthroughs and creative do-it-yourself home repairs is obvious. This seems to call for equally gigantic (cognitive) causes. These epistemological cuts (to adapt Latour [
12]) between science and DIY, between quotidian and scientific reasoning, or between modern science and ethnoscience, invite an explanation in terms of cognitive differences. Before embarking on a quest—a quixotic one, I might add—for gigantic differences in cognitive processes, we should seek to describe more mundane and materially anchored processes that ratchet up otherwise unexceptional abilities to produce exceptional epistemological outputs. In a foundational paper (Visualization and cognition: thinking with eyes and hands), Latour [
13] describes the costs and resources involved in the material scaffolding from which scientific inferences can be made. Complex phenomena are mobilised and their forms made immutable in two-dimensional images or graphical representations that can be inspected. Enormous resources are invested over long periods to produce maps of the infinitely big (galaxies) and the infinitely small (a plant’s genome); thinking is performed with and through these inscriptions, these scalar transformations from which simple perceptual judgments can be made, not with the raw, untreated, unprocessed phenomena in the ‘real world’. Once mobilised, reduced, and transcribed (at enormous costs, and with metrology playing an essential role), their recombination yields brand new vistas from which we
see new things. Latour (p. 21) writes:
“One aspect of these combinations is that it is possible to superimpose several images of totally different origins and scales. To link geology and economics seems an impossible task, but to superimpose a geological map with the printout of the commodity market at the New York Stock Exchange, requires good documentation and takes a few inches. Most of what we call ‘structure’, ‘pattern’, ‘theory’, and ‘abstraction’ are consequences of these superpositions”.
We should, of course, accept or concede differences in scale but reduce the differences in causes. Our goal should be to develop explanations for these scalar differences (e.g., Big-C and little-c) that provide the most leverage for a minimum investment in the cognitive abilities of an individual agent (not dissimilar to a return-on-investment measure; Schrage [
14]). Thus, one key principle of our working method is to reject, a priori, distinctions such as Big-C and little-c that call for big and little cognitive processes. It is true that there are gigantic differences in output, but cognitive exceptionalism should be rejected. Weisberg has offered many case studies of exceptional creativity in a range of domains (e.g., art, architecture, engineering, epidemiology), illustrating how these original products were constructed on the basis of non-exceptional cognitive abilities, such as near-analogies with previous work (e.g., see his analysis of Picasso’s
Guernica in Weisberg [
15]; the argument against exceptionalism is presented in Weisberg [
16,
17]). Any form of exceptionalism undermines the systemic argument. It behoves researchers to provide a description of how the association and interaction among a set of heterogeneous elements give rise to a creative idea or product (it is also much more empirical and traceable); that description will carry a much fuller account of the emergence of creativity than could be achieved by simply resorting to any form of a priori exceptionalism.
3. From Ideation Ground Zero to Enacted Ideation: Prototyping
“Designing without prototyping doesn’t make any sense. (…) it is mandatory to use prototyping as a tool. Not a tool to refine ideas but a mean to construct ideas”.
At the apex of their art, the likes of Michelangelo, Andrea Palladio, Henry Ford, and James Dyson built prototypes; in the case of Dyson, over 5000 of his cyclonic vacuum cleaners [
19]. Great art, architecture, manufacturing, and engineering is not simply a creative cognition story (i.e., an ideation story); it is a story of prototyping: “Prototyping is ubiquitous in the development of innovative products, services and systems” ([
20] p. 23). In their review, Camburn et al. identify four key contributions of prototyping: (i) exploration, (ii) active learning, (iii) refinement, and (iv) communication (see Figure 2, p. 3). The iterative cycle of production–evaluation–modification–reconstruction [
19] traces a non-linear path to the finished product. The more concrete the prototypes, “the easier it is to converse with them and have them tell you what makes your idea wrong” ([
21] p. 100). It is the old Edison trope: through prototyping, you do not fail a thousand times to create something that works, you
discover a thousand ways in which it does not. In his book
Creative Selection, Kocienda ([
22], pp. 154–156) offers these reflections on the demos that were built in the process of developing the first iPhone (the secret
Project Purple as it was called then; Kocienda worked on the keyboard, which eventually took the familiar QWERTY layout, but we should remember how texting was achieved before smartphones; i.e., letter triads or quads in alphabetical order crowded each digit on the dialling pad):
“At Apple (…) demos made us react, and the reactions were essential. Direct feedback on one demo provided the impetus to transform it into the next. Demos were the catalyst for creative decisions, and we found that the sooner we started making creative decisions, whether we should have big keys with easy-to-tap targets or small keys with software assistance, the more time there was to refine and improve those decisions (…) concrete and specific demos were the handholds and footholds that helped boost us from the bottom of the conceptual valley so we could scale the heights of worthwhile work” (the emphasis is mine).
Prototypes are a source of ideas; prototyping is a key process of ideation: “prototypes are critical for inquisitive exploration of concepts, as they enable organic learning and discovery (…) prototyping leads to new design ideas. Traditional development process can be
inverted to use prototyping for ideation” ([
20] p. 6, my emphasis). The inversion here is about the direction of the narrative about innovative creativity: it is not explained by an initial idea, it is explained by its translation and transformation through prototyping. Prototypes are provocative [
23] and ‘purposeful objects’ ([
24] p. 129); their current embodiment spells out their next transformation. Prototypes are actants in the scenography of innovation; they are ‘partners in thinking’ ([
23] p. 85). Thus, it is in the making that innovations are produced, and the explanation of creativity starts by first capturing this iterative translation process of making.
The research and scholarship on prototyping reverse the causal directionality of what might be called a hylomorphic model of creativity and innovation [
25]. In such a model, an idea originates in the mind of the creator, and this idea is then the cause that shapes the innovative product. The making part is mere implementation (“standard engineering” as Arthur [
26] p. 121, phrases it). Prototyping suggests that it is in the making that ideas emerge, or what Buchman [
27] simply calls ‘make to know’. Let us contrast this approach with creative cognition research. Here, the focus is on idea generation and idea selection measured with standardised tests of divergent thinking and executive processes. This, in turn, motivates the development of neurocomputational models of creative cognition (e.g., [
28]), synthesising efforts to trace the brain networks that correspond to these cognitive operations (e.g., [
29]) or efforts to enhance these cognitive processes with transcranial direct current stimulation [
30] and network science applied to the structure of semantic memory [
31]. These cephalocentric efforts are predicated on a diffusion model of ideas where ideas have a vis inertiae; that is, ideas that are mentally conjured up have an immutability, a force, that causally impacts the world. The cognitivist bet is that such a focus on mental ideation (and its neural underpinning at the point of its appearance in consciousness) deserves an investment in research and modelling resources because it holds the key to explaining creativity and innovation.
These efforts are steeped in methodological individualism and neurocentrism. Methodological individualism is antithetical to a systemic account of creativity in that it encourages (indeed, blinds) researchers to consider only the individual, decoupled from the system in which they are embedded. Neurocentrism, in turn, is an inordinate fondness for neuroscientific ‘explanations’ of creativity that exclude the role of objects. These efforts are predicated on an impervious separation between subjects and objects, people and things, plagued by (but ignoring) the unresolvable dualism that they implicitly promote. Research proceeds by isolating people from objects; the standardised tests that operationalise creative cognition are devoid of interactivity. Since ideation ground zero is taken to be where creativity originates, efforts are made to isolate it from the worldly interactivity that may contaminate its pure assessment. Agency and intentionality are not problematised; in fact, these research efforts are predicated on this unassailable and compelling intuition that validates a rigid ontological separation: people have agency and intentionality, objects have neither. Hence, their hybridised meshing is rejected a priori. These efforts also implicitly—and at times explicitly, as in the OECD report mentioned earlier—promote a form of exceptionalism, which, in turn, is motivated by a tautology: (exceptionally) creative people have (exceptionally) creative ideas. As with any regress, it is as unproductive as it is empty of any explanatory value (it is a virtus dormitiva).
Creativity is a process, not a feature or characteristic of a person’s brain or personality. Creativity accrues in making, taking shape in a metamorphic zone (to adapt Latour, [
32]) where unpassable boundaries are replaced with porous borders [
33], a place of exchange through which people and objects co-define each other. The argument developed here follows Latour’s [
5] seventh rule of method from his book
Science in Action, which calls for a moratorium on cognitive explanations of science and technology. Latour is not, however, promoting radical behaviourism; he acknowledges that people have ideas and mental representations (of course). The key question is whether ideas explain the sequence of experimental manipulations and engineering adjustments leading to discoveries or whether they are shaped by practice—specifically, actions and interactions with things. If the latter is true, then we need to examine how humans interact with various elements, objects, and technological artefacts that shape the environment where thinking occurs. Latour’s seventh rule suggests that before turning to cognitive explanations for innovation, we should first provide a detailed description of the interactions within the system. These complex interactions highlight the distributed, extended, and hybrid nature of cognition. Thus, the argument presented here aligns with the 4E perspective on creativity, viewing it as an embodied, embedded, enacted, and extended process.
Next, I will offer a theoretical vocabulary and outline a set of methodological principles, drawing heavily on science and technology studies and prototyping, to illustrate how creative problem-solving can be mobilised under laboratory conditions, revealing how ideation is constructed out of interactions with objects and physical models of problem solutions. As I hope to demonstrate, research on creative problem-solving that is systemic in character is better placed to explain the emergence of new ideas.
4. Mobilising Creative Problem-Solving: Methodological Proposal and Illustration
Drawing from science and technology studies, the starting point of the analysis is an assumption, one of symmetry among actants [
2]. In semiotics, as I mentioned earlier, an actant is anything that alters the course of an event. Thus, actants can be human or non-human; the assumption broadens the remit of the inquiry to include objects in the unfolding of cognition, with interesting theoretical consequences for the nature of cognitive processes and equally important consequences for the methods that are employed to trace them (in other words, methods that allow their expression, since methods are performative [
34]). While the actants that shape the unfolding of an event may have radically different ontologies, the investigative assumption sets aside these concerns; the fear that one is making a fatal category mistake is kept at bay through a complementary move: in order to understand cognitive processes as they take place in the world, boundaries should be blurred, and the investigation should rise above a priori “disciplinary organizations of knowledge”, as Bijker and Law [
35] phrase it.
The investigation borrows and adapts two key analytic approaches from psychological qualitative research methods: epoché and horizontalisation. Thus, ethnographers of cognition should bracket some common
preconceptions, namely: (i) the genesis of creative ideas is a neurological story (and ultimately the genesis can be reduced to a neurological substrate); (ii) a mental representation produced by this neurological substrate holds the key; (iii) creativity is the product of a mental process that restructures this representation into a novel one; (iv) hence, there is a spatial/neurological and temporal set of coordinates that capture the spark, i.e., the moment when a new idea is mentally conjured, and this ideation ground zero should be the focus of creative cognition researchers; (v) once formed, this initial idea is the cause that threads subsequent events that reify it materially into a product deemed to be creative (what Latour [
5], calls a diffusion model of ideas).
In turn, I borrow—and distort for my purposes—the term horizontalisation. In phenomenological research, horizontalisation refers to the equal weighting of a participant’s statements uttered to describe their experience [
36]. From a cognitive ethnography perspective, horizontalisation encourages the equal weighting of all the actants that configure a cognitive ecosystem (as Hutchins [
37] calls it). The analysis does not proceed from an a priori classification of the relative agentic consequences of the heterogeneous elements in this system. Heterogeneity does not beget asymmetry. Thus, in mapping out the elements of this system, we should remain alert to how these elements, human and non-human, dynamically co-constitute themselves over time and space as the system is transformed through the interactions among them.
Efforts to mobilise the complexity of such cognitive ecosystems under laboratory conditions inevitably involve a drastic reduction in scale in terms of the ensemble of elements that configure these systems, as well as in terms of the temporal and spatial dimensions along which they interact and associate. A laboratory-based science bets on this scalar simplification to observe a phenomenon more clearly, to understand the parameters within which it manifests, with the aim of tracing and controlling its expression [
38]. As I have argued ([
3,
6,
39,
40]), an insight problem-solving procedure offers an interesting laboratory setting to observe and record how elements in a narrow cognitive ecosystem dynamically configure themselves to promote the emergence of a new idea, which is, in this instance, the solution to a vexing problem. Problems used with this laboratory preparation often have a normatively correct answer; this normativity is not given a priori with ‘real world’ problems; indeed, that normativity is discovered. Still, it is the normative metric that enables researchers to unambiguously assess whether participants have abandoned the incorrect interpretation and discovered a more productive way of working on the problem. The procedure uses so-called insight problems: these come in various guises [
41]; some are verbal riddles (or ‘stumpers’ [
42]), some are visuo-spatial problems (e.g., the triangle of coins [
43]; the eight-coin problems [
44]), and some are hybrid problems involving numerical and spatial features (e.g., the 17-animal problem [
45]) or matchstick arithmetic problems (e.g., [
46]). In all instances, the problems are designed to cue an initial, intuitive, but incorrect interpretation, which, in turn, triggers an unproductive strategy to solve it. The 17-animal problem masquerades as a simple arithmetic problem, but participants are soon confronted with the impossibility of dividing an odd number into four odd ones (the sum of four odd numbers is necessarily even); the new idea that can unlock the solution is about the spatial arrangements of the enclosures. In matchstick arithmetic problems, participants invariably seek to move a stick from an operand (a number) to create a new operand. Given the expression | = || + ||, this strategy will not work. The new idea that might unlock the solution is to decompose an operator to create | = ||| − || (and, having said this, people may entertain the idea of decomposing an operator, specifically the plus sign, but still cannot ‘see’ on the basis of mental simulation alone how this might result in a solution; what they need is to create a solution prototype by physically moving the sticks [
6]). The procedure is thus useful because, if properly instrumentalised, it can capture the transition from an unproductive knowledge state to one where the solution takes shape. In other words, the procedure can capture the emergence of a new creative idea that solves the problem. Cognitive psychologists (e.g., [
47]) are fond of calling this transition ‘restructuring’. In their account, what is restructured is a mental representation. To be sure, once an insight problem is solved, for example, when enclosures overlap and some animals are double counted (in the 17-animal problem) or an operator is turned into a minus (in the matchstick arithmetic problem described above), participants have abandoned their initially incorrect strategy to solve the problem and adopted a new one. To say that their representation of the problem is restructured is to describe the outcome of a process, not necessarily its antecedent cause [
48].
First- and Second-Order Procedures
My earlier reflections on the importance of prototyping foreground the dynamic interplay between making and ideation in a processual arc that terminates in an innovative idea/product. Retrospective narratives of innovation tend to be written in terms of the diffusion of an initial idea, but in the more granular tracing of innovation, and despite the ontological cleft that separates humans and non-humans, ideas and objects are inseparable conversational partners, and, as with any conversation, turn taking is deeply contingent and non-linear. And yet, it is interesting to examine the procedure commonly employed by insight researchers. Let us take the procedure designed for the influential studies reported in Knoblich et al. [
46], and used by many others subsequently (e.g., [
49]). Here, matchstick arithmetic problems are presented to participants on a computer screen. Participants stare at the screen until they announce a solution or until a set period elapses (300 s in Knoblich et al.). The matchstick problems are static images displayed on the computer; they are not made of actual or virtual matchsticks that can be manipulated and re-arranged to create solution prototypes that can be inspected and from which new solution prototypes can be constructed. The only processes that can explain the discovery of the solution are mental (the resulting account is inescapably cognitivist, not systemic): participants rehearse strategies and simulate matchstick movements to test candidate solutions in their mind. Vallée-Tourangeau and March [
50] term this procedure
second-order because there is no material engagement; participants cannot think with and through the world (to adapt Malafouris [
51]). Systemic resources in which cognition is situated outside the cognitive psychologist’s laboratory are compressed and flattened along a mental plane. The procedure can only perform a mental explanation of creative problem-solving. For example, abandoning a strategy to transform operands in favour of one that deconstructs operators (e.g., to solve a problem such as | = || + ||) can only be evinced mentally. A mental representation is said to be restructured through representational change [
52,
53]. Notwithstanding the tautological whiff of this explanation, theoretical efforts are geared toward understanding the mental heuristics/strategies and unconscious processes (e.g., patterns of activation coursing through semantic memory) that result in a restructured representation from which the solution can be conceived.
The relative contribution of conscious deliberate analysis and unconscious processes has been the focus of a long-raging debate between two factions, those who support the so-called business-as-usual view of insight problem-solving (e.g., [
54]) and those who support the so-called special (as in ‘non-routine’) processes view (e.g., [
47,
55]; see [
39] for a review). Neither faction, however, pays much attention to the role of objects in creative problem-solving. Certainly, when a second-order procedure is adopted, objects are absent and interactivity is impossible; nor can solution prototypes be constructed. The procedure is simply blind to the dialogue with objects that takes place in so many creative problem-solving contexts beyond the laboratory.
As I have advocated (e.g., [
6]), a first-order procedure, where participants can think with and through the world, offers a much more fruitful platform to record and examine creative problem-solving. Insight problems are still employed, since they are useful in tracing the transition from impasse to breakthrough. A first-order procedure materially reifies this transition, with benefits both for the participant engaged in the problem-solving task and for the researcher. There is no need here for neuroimaging to identify the neural correlates of insight. Rather, with a properly instrumentalised procedure, the researcher can see, literally, how changes to the physical model of the solution co-determine the changes in the participant’s ideation or mental representation of the solution.
Let me illustrate concretely how a study on creative problem-solving unfolds from a systemic perspective and what different solution processes it reveals. In doing so, I will use the recent data reported in [
6]. In this experiment, participants were invited to solve a series of matchstick arithmetic problems in which they could interact with the features of the problem to create prototype models of the solution. The problems were presented on a computer (on a set of PowerPoint slides shown in edit mode), and the session was screen captured to provide a video recording of the participants’ actions and the resulting changes to the solution prototypes. In addition, participants were trained to provide a running commentary on their thoughts, hunches, and strategies as they worked on the problem. The resulting audio–video recording provides three data streams: (i) what participants say and when, (ii) the nature and timing of their actions, and (iii) the dynamic changes to the solution prototype. The videos were analysed with ELAN (
https://archive.mpi.nl/tla/elan (Accessed on 30 October 2024); Max Planck Institute for Psycholinguistics, The Language Archive, Nijmegen, The Netherlands; see also [
56]), a coding platform that affords the granular temporal juxtaposition of these concurrent data streams. The qualitative coding of these videos helped identify three distinct solution processes: analysis, insight, and outsight. The first two are the traditional binary options that have been the focus of the debate in insight problem-solving [
48].
An analysis solution process is revealed in the video data when a participant’s verbal protocol clearly indicates how hunches guide the quasi-systematic exploration of different solution prototypes. Changes to the prototype solutions are anticipated and guided by the participant’s hypothesis. Clearly, the exploration is made simpler by interactivity, since the participant can see rather than mentally simulate the change in the arithmetic expression, but a solution process is classified as analysis due to the hypothesis-driven nature of the exploration; 35% of the solution processes could be classified in this manner. In turn, insight solutions, exhibiting the so-called pure insight sequence (impasse followed by a sudden breakthrough), were relatively rare (11%). A solution process was classified as insight on the basis of four coding criteria, namely, (i) the solution was announced suddenly, (ii) the solution announced was diametrically different from the immediately preceding hypothesis as evidenced in the verbal protocol, (iii) the solution was not cued by a change in the solution prototype, and finally (iv) the solution announcement was marked by emphatic expressions of joy and relief (the phenomenological markers of “aha!” [
47]).
The most frequent process, observed for 54% of the correctly solved problems, was outsight, and this solution process can only be captured using an interactive, systemic, problem-solving procedure. Outsight is experienced and driven by the information in the object constructed. It differs from analysis, and this difference is revealed by the coding granularity offered by ELAN. Here, a participant is much less guided by distinct hypotheses, and the consequences of the transformation of the solution prototype are not anticipated until they are revealed by the appearance of the object constructed. In the video data, I encountered two types of outsight processes. The first is the post hoc type. Here, a solution prototype is constructed, not guided by a specific strategy but rather driven by a mix of aimless and playful exploration. Once the prototype is constructed, and after a short pause spent inspecting it, participants are surprised (and relieved) to discover that they have stumbled upon the correct matchstick configuration that corresponds to the solution of the problem. The outsight phenomenology is not dissimilar to the insight one and is entirely driven by the physical transformation and appraisal of the resulting matchstick configuration. The second type of outsight is called enacted. Here, again, the granular temporal juxtaposition of verbal protocol with actions and material transformation offered by ELAN plays a crucial role in revealing the phenomenon. An enacted outsight solution is announced during the movement of a matchstick, but the consequence of this movement is not anticipated by the participant before the movement is initiated. Rather, it is in the process of moving the stick, and as the physical configuration of the prototype changes, that the participant announces the solution.
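The kind of temporal juxtaposition at stake can be pictured with a toy sketch (hypothetical tier names, pause threshold, and data; this is not the qualitative coding scheme of [6], which relies on human raters working in ELAN): an announcement that overlaps an in-progress stick move is a candidate enacted outsight, while one that follows a completed move after a pause is a candidate of the post hoc type.

```python
from dataclasses import dataclass

@dataclass
class Event:
    tier: str     # e.g., 'speech' or 'action' (hypothetical tier names)
    label: str
    start: float  # seconds from session start
    end: float

def classify_announcement(announce, events, pause=1.0):
    """Toy triage of a solution announcement against an action tier.
    Overlap with an in-progress stick move suggests enacted outsight;
    an announcement following a completed move after a pause suggests
    the post hoc type. Real coding is done by human raters."""
    moves = [e for e in events if e.tier == 'action' and e.label == 'move']
    if any(m.start <= announce.start <= m.end for m in moves):
        return 'candidate enacted outsight'
    finished = [m.end for m in moves if m.end <= announce.start]
    if finished and announce.start - max(finished) >= pause:
        return 'candidate post hoc outsight'
    return 'unclassified'

session = [Event('action', 'move', 10.0, 12.5)]
print(classify_announcement(Event('speech', 'solution', 11.2, 13.0), session))
# 'candidate enacted outsight'
print(classify_announcement(Event('speech', 'solution', 14.0, 15.0), session))
# 'candidate post hoc outsight'
```

The sketch only formalises the temporal logic of the distinction; the substantive coding decisions (what counts as a hypothesis, a move, or an announcement) remain interpretive.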
5. A Double Process of Becoming
A laboratory-based insight problem-solving platform offers an interesting vista from which to observe the origins of novel objects-ideas, although the procedure must be designed to afford interaction with the problem elements, such that solution prototypes can be constructed. The procedure also needs to be properly instrumentalised to yield data that capture a double process of becoming, that is, the co-evolution and co-determination of ideation and physical prototyping. As described above, the construction of the solution and the construction of the prototype that embodies it can only be qualitatively captured through the granular coding of video data, for example, with the help of the ELAN interface, as illustrated in [
6]. Solution prototypes are important external artefacts in creative problem-solving. They scaffold memory since they act as a transactive memory device (cf. [
57]), embodying past efforts and current strategies. An interactive problem-solving procedure helps us map the extended nature of creative cognition. But the story that unfolds in the data is not simply one of extension or augmentation; it is a story of transformation, akin perhaps to Malafouris’s [
58] notion of enactive signification, in that meaning or ideation is enacted through the construction of solution prototypes. Outsight illustrates how objects and ideas (or the people who articulate them) co-evolve through their dynamic entanglement; it reveals a double process of becoming. Solution prototypes, at each of their dynamic waystations, are boundary objects that bridge ideation and the next prototyping iteration, converging on an epistemic state that coincides with the normative configuration (for participants who solve the problem). This is not to say that objects and interactivity are necessary in this simple problem-solving procedure. People can solve matchstick arithmetic problems in their heads, formulating strategies and simulating movements mentally. As Clark ([
59], p. 24) puts it: “[W]e often do lots of stuff entirely in our heads, using inner surrogates for absent states of affairs. But it is surely worth noticing just how much of our cognitive activity is not like that; brains like ours will go to extraordinary length to avoid having to resort to fully environmentally detached reflection”. Indeed, solution rates for matchstick arithmetic without interactivity tend to be lower, unsurprisingly [
6,
60].
A cognitivist approach explains creativity from an ideation ground zero and assumes a diffusion model of ideas. In such a model, the explanandum is an initial idea, formed at a given moment in time, a position that implicitly promotes creative exceptionalism (to explain so-called Big-C creativity compared to little-c creativity) and the concomitant quest to discover the equally exceptional neural substrate that ‘explains’ it. A systemic perspective, however, promotes a translation model of ideas that proceeds on the basis of interactivity and prototyping. In this model, the explanandum is the resulting dialogue between people and prototypes, treated symmetrically as actants in a system of creation.