Dual-process cognition and the explanatory gap*
Brian Fiala, Adam Arico, and Shaun Nichols
University of Arizona
_________________________________________________
Abstract:
Consciousness often presents itself as a problem for materialists because no matter which physical explanation we consider, there seems to remain something about conscious experience that hasn't been fully explained. This gives rise to an apparent explanatory gap. The explanatory gulf between the physical and the conscious is reflected in the broader population, in which dualistic intuitions abound. Drawing on recent empirical evidence, this essay presents a dual-process cognitive model of consciousness attribution. This dual-process model, we suggest, provides an important part of the explanation for why dualism is so attractive and the explanatory gap so vexing.
_________________________________________________
1. The Explanatory Gap
Perhaps the broadest and most unassuming philosophical question about consciousness is “What is the relationship between consciousness and the physical world?” It is prima facie difficult to see how the pains, itches, and tickles of phenomenal consciousness could fit into a world populated exclusively by particles, fields, forces, and other denizens of fundamental physics. But this appears to be just what physicalism requires. How could a thinking, experiencing mind be a purely physical thing?
One approach to this problem emphasizes our epistemic situation with respect to consciousness, and especially the distinctively explanatory situation. Epistemic approaches focus on whether we can acquire knowledge, justified belief, or an adequate explanation regarding the nature of consciousness. Thomas Huxley famously gestures at this aspect of the problem of consciousness:
But what consciousness is, we know not; and how it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp. (Huxley 1866, 193)
(^*) For helpful comments on the manuscript, we thank: Sara Bernstein, Mark Collard, Jonathan Cohen, Chris Hitchcock, Terry Horgan, Bryce Huebner, Chris Kahn, Josh Knobe, Uriah Kriegel, Edouard Machery, Ron Mallon, J. Brendan Ritchie, Philip Robbins, Ted Slingerland, and Josh Weisberg.
Huxley’s suggestion is that no account can be given of the relationship between consciousness and the brain, where an “account” amounts to something like ‘an adequate scientific explanation’. Huxley’s skepticism about the prospects for a physicalist account of consciousness drives him to the view called “epiphenomenalist dualism,” according to which consciousness is not itself a physical phenomenon and has no causal impact on any physical phenomena, but is nonetheless (and rather mysteriously) caused by physical phenomena (1874/2002).
More recently, Levine has introduced an updated version of this problem, which he dubs “the explanatory gap.” According to Levine, “psycho-physical identity statements leave a significant explanatory gap” (1983). That is, theories that attempt to explain consciousness by identifying it with some physical property or process will inevitably seem to leave out something important about consciousness. Specifically, what’s supposed to be left out is the felt quality of what it’s like to undergo a conscious experience such as seeing the color red. Since it appears inevitable that purely physical theories will ‘leave something out,’ Levine suggests that there is a serious worry that such theories will inevitably fail to adequately explain consciousness. Levine elaborates on this suggestion by claiming that there is an air of “felt contingency” about the relationship between consciousness and processes in the brain (and indeed, between consciousness and any physical process). That is, there seems to be something contingent or arbitrary about any purported connections between physical processes and conscious states. But good explanations are not arbitrary.^1 As a result, it is hard to see how a theory invoking ‘mere’ brain activity could be a complete explanation of consciousness. Levine concludes, along with Huxley, that the explanatory gap is a serious obstacle for physicalism.
One prominent argumentative strategy at this juncture is to draw on this apparent epistemic obstacle as support for conclusions about the nature of consciousness. For example, one might take the explanatory gap(s) discussed by Huxley and Levine as indicative of a corresponding duality in nature. If no physical theory can fully explain consciousness, it seems doubtful that consciousness is something physical. For if something is not fully physically explicable then it is not a completely physical phenomenon. Therefore, consciousness must not be a physical phenomenon. This formulation of the argumentative strategy is overly simple, but it serves to illustrate the strategy of arguing from epistemic premises to conclusions about the nature of consciousness.
While the explanatory gap is central to contemporary philosophy of mind, it is plausible that the gap gives philosophical expression to a much more pervasive phenomenon – even people without any philosophical training find it bizarre and counterintuitive to think that consciousness is nothing over and above certain processes in the brain. Paul Bloom takes this to be part of a universal inclination towards folk dualism. According to Bloom, people are “dualists who have two ways of looking at the
(^1) Admittedly, this gloss on the issue of modality and reductive explanation crushes many
subtleties. We apologize for this injustice. For reasons that will soon become clear, our primary focus in this paper is on the psychological aspects of the problem of consciousness, rather than on the modal aspects.
produce conflicting outputs with respect to the same cognitive task or subject matter. For instance, consider the following argument:
All unemployed people are poor.
Rockefeller is not unemployed.
Conclusion: Rockefeller is not poor.
On reading this argument, many people judge incorrectly that the argument is valid. According to dual-process theory, that is because people’s belief in the conclusion biases a system 1 reasoning process to the incorrect answer. However, most people can be brought to appreciate that the argument is not valid, and this is because we also have a system 2 reasoning process that has the resources to evaluate the argument in a consciously controlled, reasoned fashion (see, e.g., Evans 2007). Of course, System 1 and System 2 can (and quite often do) arrive at the same verdict. For instance, changing the first premise of the above argument to “Only unemployed people are poor” allows both systems to converge on the judgment that the argument is valid.
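To see the invalidity concretely, the following minimal sketch (ours, not the paper’s) brute-forces a one-individual interpretation on which both premises are true while the conclusion is false; the existence of such a counterexample is exactly what makes the argument invalid.

from itertools import product

# Truth conditions restricted to the single individual, Rockefeller.
def premises_hold(unemployed, poor):
    premise1 = (not unemployed) or poor   # "All unemployed people are poor"
    premise2 = not unemployed             # "Rockefeller is not unemployed"
    return premise1 and premise2

def conclusion_holds(unemployed, poor):
    return not poor                       # "Rockefeller is not poor"

counterexamples = [(u, p) for u, p in product([True, False], repeat=2)
                   if premises_hold(u, p) and not conclusion_holds(u, p)]
print(counterexamples)  # [(False, True)]: employed yet poor, so the premises can be true and the conclusion false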
Although the dual-process paradigm provides a tidy picture of the mind, it is unlikely that all mental processes will divide sharply and cleanly into the two categories, such that either a process has all the characteristic features of System 1 or all of the characteristic features of System 2. It would not be surprising, for instance, to find processes that are fast and computationally simple but not associationistic (cf. Fodor 1983). So if we find that a process has one characteristic system 1 feature, it would be hasty to infer that the process has the rest of the characteristic system 1 features. Nonetheless, the dual-process approach is useful so long as one is clear about the particular characteristics of a given psychological process (cf. Samuels, forthcoming; Stanovich and West, 2000; Stanovich, 2004).
We think the distinction between processes that are automatic and processes that are consciously controlled really does capture an important difference between cognitive systems in many domains, including the domain of conscious-state attributions. We suggest (i) that there are two cognitive pathways by which we typically arrive at judgments that something has conscious states, and (ii) that these pathways correspond to a System 1 / System 2 distinction. On the one hand, we propose a “low-road” mechanism for conscious-state attributions that has several characteristic System 1 features: it is fast, domain-specific (i.e., it operates on a restricted range of inputs) and automatic (the mechanism is not under conscious control).^3 On the other hand, there are judgments about conscious states that we reach through rational deliberation, theory application, or conscious reasoning; call this pathway to attributions of conscious states “the high road.”
(^3) We remain neutral on whether the mechanism has other features of System 1, like
being associationistic, evolutionarily old, and computationally simple.
2.2 Conscious attribution: The low road
In an earlier paper (Arico et al. forthcoming), we sketch a model – the AGENCY model – of one path by which we come to attribute conscious states.^4 In our dual-process approach, the AGENCY model describes the “low road” to conscious-state attribution. According to this model, we are disposed to have a gut feeling that an entity has conscious states if and only if we categorize that entity as an AGENT, and typically we are inclined to categorize an entity as an AGENT only when we represent the entity as having certain features. These features will be relatively simple, surface-level features, which are members of a restricted set of potential inputs to the low road process. Previous research has identified three features that reliably produce AGENT categorization: that the entity appears to have eyes; that it appears to behave in a contingently interactive manner; and that it displays distinctive (non-inertial) motion trajectories.
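As a rough illustration of the structure we have in mind (only an illustration: the model specifies no algorithm, and the attribute names and the assumption that a single cue suffices for AGENT categorization are simplifications of ours), the low road can be pictured as a simple cue-driven categorizer:

from dataclasses import dataclass

@dataclass
class Entity:
    has_eyes: bool = False
    contingent_interaction: bool = False
    non_inertial_motion: bool = False

def categorize_as_agent(e: Entity) -> bool:
    # Superficial cues found to produce AGENT categorization; treating any one
    # cue as sufficient is a simplifying assumption for this sketch.
    return e.has_eyes or e.contingent_interaction or e.non_inertial_motion

def low_road_disposition(e: Entity) -> bool:
    # The gut-level disposition to attribute conscious states tracks AGENT
    # categorization; for non-AGENTs the mechanism simply is not triggered.
    return categorize_as_agent(e)

print(low_road_disposition(Entity(has_eyes=True)))  # True, e.g. an ant or a mammal
print(low_road_disposition(Entity()))               # False, e.g. a cloud or a river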
We developed the AGENCY model as a natural extension of work in developmental and social psychology. In their landmark study, Fritz Heider and Marianne Simmel (1944) showed participants an animation of geometric shapes (triangles, squares, circles) moving in non-inertial trajectories. When participants were asked to describe what was happening on the screen, they tended to utilize mental state terms—such as “chasing”, “wanting”, and “trying”—in their descriptions of the animation. This suggests that certain types of distinctive motion trigger us to attribute mentality to an entity, even when the entity is a mere geometric figure.
More recently, developmental psychologists Johnson, Slaughter, and Carey (1998) presented 12-month-olds with various novel objects, one of which was a “fuzzy brown object”. Johnson et al. found that when the fuzzy brown object included eyes, infants displayed significantly more gaze-following behavior than when the fuzzy brown object did not include eyes. They also found that infants displayed the same gaze-following behavior when the fuzzy brown object, controlled remotely, moved around and made noise in apparent response to the infant’s behavior. Johnson and colleagues explain these effects by suggesting that when an entity has eyes or exhibits contingent interaction, infants (and adults) will categorize the entity as an agent. Once an entity is categorized as an agent, this generates the disposition to attribute mental states to the entity, which manifests in a variety of ways, including gaze following, imitation, and anticipating goal-directed behavior. Figure 1 depicts the model of mental state attribution suggested by these studies.^5
(^4) This model was developed in the wake of recent work on the folk psychology of
consciousness (Gray et al. 2007, Knobe & Prinz 2008, Sytsma & Machery 2009). As with the other work in the area, our model focuses on attributions of conscious states to others. But it’s possible that a quite different mechanism is required to explain attributions of conscious states to oneself. (^5) Similar results have been found by Shimizu & Johnson, who built upon a finding by
Amanda Woodward. Woodward (1998) had infants watch a human arm reach for an object, and then moved the object to a different location; when the arm reached, again, to the original location of the object, rather than to the object in its new location, infants
Simmel.^6 In addition, the model predicts that people will be automatically inclined to attribute conscious states to the kinds of entities that have the superficial cues.
This is precisely what we found in a reaction time experiment (Arico et al. forthcoming). One characteristic of dual-process models, including ours, is that the low road system is supposed to be very fast; by contrast the high road system, which draws on a broader information base, is comparatively slow. In a reaction time paradigm under speeded conditions, low-road processing should occur quickly and automatically, with high-road processes lagging behind. Given this standard interpretation of response times, the AGENCY model predicts slower reaction times when participants deny conscious state attributions to objects that are typically classified as AGENTS (as compared broadly to non-AGENTS). The idea is that if someone were to overtly respond that entities categorized as AGENTS don’t feel pain (e.g. because they lack appropriate neural structures), this would require overcoming the hypothesized low-road disposition to attribute conscious states to those entities, which would take some extra time. To test our model, we presented subjects with a sequence of Object/Attribution pairs (e.g., ant / feels pain), and the subjects were asked to respond as quickly as possible (Yes or No) whether the object had the attribute. Attributions included properties like “Feels Happy” and “Feels Pain”, while objects included various mammals, birds, insects, plants, vehicles, and natural objects. We recorded both participants’ overt judgments and the time taken to answer each item. We found that participants quickly affirmed attributions of consciousness for those objects that typically possess the relevant features (mammals, birds, insects), while they responded negatively to attributions of consciousness to things that typically lack those features (vehicles, inanimate moving objects like clouds and rivers). More importantly, the reaction time results confirmed the predictions entailed by the AGENCY model of low-road consciousness attributions. Participants responded significantly more slowly when they denied conscious states to objects that do have the superficial AGENCY cues, namely, insects. This result is neatly explained by our hypothesis that insects automatically activate the low road to consciousness attribution; in order to deny that insects have conscious states, subjects had to “override” the low-road output, which explains why reaction times are slower in such cases.^7
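To make the predicted contrast concrete, here is one way the comparison could be run on trial-level data; the file name, column names, and choice of test are illustrative assumptions of ours, not a description of the original analysis.

import pandas as pd
from scipy import stats

# Hypothetical trial-level data: one row per Object/Attribution trial, with
# columns "category" (e.g. insect, vehicle), "response" ("yes"/"no"), "rt_ms".
trials = pd.read_csv("attribution_trials.csv")

denials = trials[trials["response"] == "no"]
insect_denials = denials.loc[denials["category"] == "insect", "rt_ms"]
vehicle_denials = denials.loc[denials["category"] == "vehicle", "rt_ms"]

# AGENCY-model prediction: denying conscious states to AGENT-cued objects
# (insects) should take longer than denying them to non-AGENTs (vehicles),
# because the low-road output must be overridden in the former case.
print(insect_denials.mean(), vehicle_denials.mean())
print(stats.ttest_ind(insect_denials, vehicle_denials, equal_var=False))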
These experiments provide support for the AGENCY model of low-road consciousness attribution. However, there are numerous ways that this low-road process might be triggered, and many of the details of this process are largely underdetermined by the existing data. Nonetheless, the data corroborate our proposal
(^6) Of course, anthropomorphized cartoon versions of such objects may well induce an
immediate inclination to attribute conscious states. On our account, the natural explanation is that cartoons induce consciousness-attribution precisely because they have the right kinds of triggering featural cues. (^7) Why do people override the low road at all? Why not accept the gut feeling that a
spider (for example) can feel pain? People might override because of known facts about arachnid neuroanatomy: for example, that spiders lack nociceptors. Another possibility is that people override because of socially acquired ‘scripts’ about spiders: for example, “Of course spiders don’t feel pain!”
that low-road attributions of conscious states are generated by an AGENT mechanism that is triggered by a restricted range of inputs. (See Figure 3).
Figure 3: The AGENCY model of the low-road path to attributions of conscious states.
2.3 Conscious attribution: The high road
Thus far we have focused the discussion on one pathway for attributing conscious states. The low-road mechanism is not, however, the only pathway for attributing conscious states. One might instead rely on deliberation and inferential reasoning to conclude that an entity satisfies the criteria laid out by some scientific theory and, thus, judge that it (probably) has conscious states. For instance, if a trusted source tells Uriah that a certain organism has a sophisticated neural system (including nociceptors and dorsolateral prefrontal cortex), and if Uriah relies on a rudimentary theory of pain processing, then he might infer that the organism can probably feel pain. Or if a reliable informant tells Terry that some entity is conscious, Terry might conclude from that testimony that the entity is, in fact, a conscious thing. What matters is that one come to think that an entity has conscious states via a pathway that has features typically associated with System 2: processing that is domain-general, voluntary, and introspectively accessible. The process is domain general in that the inputs are not restricted – evidence can potentially come from anywhere. The process is voluntary because we can control when reasoning starts and stops. And it is introspectively accessible because the steps of the inferential process are typically available to consciousness. Let us examine each of these three features in a bit more detail.
Like most reasoning, high-road attributions of consciousness can potentially draw on an immense supply of information for evidence regarding an entity’s having conscious states. Potential resources include the individual’s current perceptual state, background beliefs, memories, and testimony from trusted sources. From there, the high-road reasoning process is only constrained by whatever rules of inference the individual has internalized.^8
High-road attributions of consciousness are voluntary actions in the same sense that many conclusions reached via deliberate inferences are voluntary. Such conclusions
(^8) Of course, limitations of working memory, time constraints, and motivational factors will also have some impact on the process.
Although the two processes will often converge, this won’t always be the case. We have already seen one illustration of this. While insects trigger the low road to attributing conscious states, many people explicitly reject the claim that insects can feel pain on the basis of facts about the limitations of insect neural systems. To take a different sort of example, as philosophers worrying about the problem of other minds, one might well come to doubt the philosophical arguments that others are conscious. This would be a case in which the high road to attributing conscious states to others does not yield the conclusion that others are conscious. However, even the skeptic about other minds will still have low-road reactions when humans swirl about him.
3. Dual-Processing and Explanatory Gap Intuitions
How exactly might our dual-process model explain the intuitive force of the explanatory gap? As sketched above, we maintain that third-person mind attribution involves two distinct cognitive processes whose outputs may either comport with or fail to comport with one another. When looking at other people, both of these systems typically produce affirmative outputs: the low road is activated by (at least) one of the other person’s surface features, producing a low-level, intuitive attribution of consciousness. At the same time, we can use the high road to reason deliberately about another entity’s being conscious (for instance, as John did in §2.3). Since the two systems generate the same answer in typical cases, there is typically no resistance to the idea that other people are conscious. However, when we consider the mass of grey matter that composes the human brain (and on which the majority of physicalist reductions of consciousness will focus), the result is altogether different.
Consider Jenny, who is in the grip of physicalism about consciousness. Using high road reasoning, she could apply the hypothesis that consciousness is identical to a certain kind of brain process, in which case Jenny’s high road would produce the output that specific brain processes or brain regions are conscious experiences.^9 For example, Jenny might believe that consciousness is identical to populations of neurons firing in synchrony at a rate between 40Hz and 60Hz (Crick & Koch 1990); on this basis she could infer (using the high road) that specific brain regions that are firing synchronously are conscious experiences. If Jenny knew that Jimmy’s brain had regions that were firing synchronously between 40-60Hz, she could infer (using the high road) that Jimmy’s brain states are conscious experiences. But since this description of Jimmy’s brain does not advert to any of the featural cues that trigger AGENCY categorization, Jenny’s low road is not activated, and thus remains silent on whether the synchronously firing neurons are conscious.^10
(^9) We use the example of a “type-identity” theory of consciousness for ease of exposition.
A similar point could be made using “token-identity” (or functionalist) theories, or other sorts of physicalist theories. (^10) Of course, if Jenny were to view a picture of Jimmy (or Jimmy himself), her low road
would be activated by the presence of the relevant featural cues, and she would be disposed to attribute conscious states to Jimmy. But saying that Jimmy (the person) activates Jenny’s low road is very different from saying that Jimmy’s brain activates Jenny’s low road.
This example, we think, helps to illuminate why physicalist explanations of consciousness leave us feeling as if something has been left out: our low-level, low road process remains silent where it would normally provide intuitive confirmation of our high road output.^11 In place of the harmony between systems that we typically experience when looking at other people (or any other mammal, for that matter), discussions of neurons, neurotransmitters, and so on create a disparity between the two systems, which in turn produces a feeling that the characterization is somehow incomplete.^12 This, we think, is an important part of the explanation for why dualism is so attractive and the explanatory gap is so vexing.^13
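The shape of this explanation can be rendered schematically as follows; this is our toy rendering of the disharmony story, not an implementation the paper offers, and the cue and theory flags are illustrative placeholders.

from typing import Optional

def low_road(stimulus: dict) -> Optional[bool]:
    # Gut-level attribution, triggered only by superficial AGENCY cues.
    cues = ("eyes", "contingent_interaction", "non_inertial_motion")
    return True if any(stimulus.get(c, False) for c in cues) else None  # None = silent

def high_road(stimulus: dict) -> bool:
    # Stand-in for deliberate, theory-driven reasoning, e.g. Jenny applying a
    # neural-synchrony theory of consciousness to a description of a brain.
    return stimulus.get("satisfies_theory", False)

def gap_feeling(stimulus: dict) -> bool:
    # Disharmony: the high road affirms consciousness while the low road is silent.
    return high_road(stimulus) and low_road(stimulus) is None

person = {"eyes": True, "satisfies_theory": True}
brain_description = {"satisfies_theory": True}   # no AGENCY cues in the description

print(gap_feeling(person))             # False: the two roads harmonize
print(gap_feeling(brain_description))  # True: something seems to have been left out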
(^11) Of course, it happens quite often that high-road representations are not accompanied
by any corresponding low-road representations. For example, I might use the high road to reason to the conclusion that e = mc^2, but there would be no corresponding low-road representations of energy, mass, or the speed of light. (Thanks to Josh Weisberg for the example). Does our theory predict a kind of gap in this case? No. Our theory only predicts these intuitions for cases in which the underlying cognitive architecture is configured for dual processing. In such cases, both high-road and low-road representations play a role in controlling behavior and inference. In cases that only involve system 2 processing, system 2 is free to control inference and behavior unfettered. Thus it is only in cases involving dual processing that dissonance between system 1 and system 2 can arise. Thus, the case of consciousness is distinct from cases of pure system 2 reasoning, because (we claim) it does involve dual processing. (^12) Our view here is anticipated in important ways by Philip Robbins and Tony Jack, who
write: “The intuition of a gap concerning phenomenality [i.e., consciousness] stems at least in part from the fact that our brains are configured in such a way as to naturally promote a dualist view of consciousness.” (2006, 75) However, Robbins & Jack’s explanation of the explanatory gap keys on moral capacities. They write, “At the heart of our account of why consciousness seems to defy physical explanation is the idea that thinking about consciousness is essentially linked to, indeed partly constituted by, the capacity for moral cognition” (2006, 75). On our view, while moral cognition might be associated with conscious attribution, the order of explanation would be reversed. The AGENCY system is primitive and not directly a moral capacity. Yet, we suggest, the AGENCY mechanism provides the primitive basis for consciousness attribution. Robbins & Jack discuss an objection to their view that is relevant here. The objection is that their view entails that people who lack basic moral capacities, like psychopaths, should fail to feel the explanatory gap (76-78). Robbins & Jack discuss various ways in which their theory can address the objection, but it’s important to note here that our theory makes no such predictions. We expect that a creature might lack moral capacities while retaining the AGENCY mechanism and the associated attributions of conscious states. (^13) We intend for this explanation to apply specifically to intuitions about the
explanatory gap, as opposed to other puzzling cases involving consciousness. This is worth mentioning because it is quite common for philosophers to advance unified explanations of the explanatory gap, zombie scenarios, the knowledge argument, and so forth. Our ambitions in this paper don’t extend that far. We will be well satisfied if we manage to illuminate the source of the explanatory gap.
case of the explanatory gap, we claim, one of the relevant cognitive processes fails to produce any output, thus leading to the disharmonious sense that the neural description is fundamentally incomplete as an explanation of consciousness.^15
4. Objections & replies
Our proposal, while new, has already met with a number of objections. In this section, we deal with what we take to be the most important of the objections we’ve encountered thus far.
4.1. Objection: What About Intentionality?
One natural objection is that if our proposed model predicts an explanatory gap for consciousness, then it must also predict an explanatory gap for “about-ness” or intentionality. In our view, the activation of AGENT leads to attributions of conscious states like pain, and also to intentional states like desires. Because attributions of conscious states and intentional states are supported by the same mechanisms, we should expect an explanatory gap for intentionality. Our model predicts that completely physicalistic explanations of intentionality will fail to trigger AGENT and consequently fail to elicit the normal pattern of gut reactions and intentionality-attributions, for reasons analogous to the case of consciousness. But, the objection continues, this prediction is problematic because while there is an explanatory gap for consciousness, there is none for intentionality. Our model predicts a gap where there is no gap. While consciousness is mysterious and problematic from the standpoint of physicalism, intentionality is relatively easy to locate in the physical world. Or so the objection goes.
For present purposes, we will simply grant the objector the claim that our model predicts that there should be an explanatory gap for some attributions of intentional states. However, it doesn’t follow that all attributions of apparently intentional states will give rise to an explanatory gap. People routinely attribute apparently intentional states, such as memory and knowledge, to computers (cf. Robbins & Jack 2006, 78-79). For instance, it’s perfectly natural to say that a chess program knows that the queen will be lost if it moves the pawn. More simply, it is familiar to say that hard disks and flash drives have memory. These attributions do not come with any air of explanatory mystery. It’s possible that we sometimes apply such computationally domesticated intentional attributions to humans as well. Nonetheless, this hardly excludes the possibility that some intentional attributions do indeed invite an explanatory gap. In fact, in one of the earliest apparent expressions of the explanatory gap, Leibniz seems to articulate an explanatory gap that folds the intentional and the conscious together:
(^15) A key difference between the Capgras delusion and the explanatory gap involves the
nature of the underlying processes. Our model appeals to standard dual-process architecture to explain the gap, whereas in Capgras, neither the morphological system nor the affective system is akin to System 2. But that doesn’t diminish the thrust of the analogy. The critical point is that independent systems are involved, and they produce disparate outputs about the target domain where harmonious outputs are the norm.
If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill. Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception. (Leibniz, 1714/1989, sec. 17, emphasis added)
Nor is this view merely a curiosity of the 18th century. A number of prominent contemporary philosophers have quite explicitly defended an explanatory gap for intentionality. Since it remains very much a live philosophical question whether there is an explanatory gap for intentionality, we think the intentionality objection is far from decisive.^16
4.2. Objection: The proposal mislocates the gap, part 1: Phenomenal concepts
It might be objected that our account doesn’t illuminate the explanatory gap because the gap is really driven by the difference between the first-person properties that are involved in conscious experience, and the third-person properties adverted to by scientific theories of conscious experience.^17 To explain this objection we first need to review quickly how the apparent alternative goes. A property dualist might maintain that even if mental processes (or events, or things) are identical to physical processes (or events, or things), there still seems to be a distinctive class of mental properties that objective science cannot explain. Specifically, the subjective and qualitative properties of conscious experience seem to resist scientific explanation and reduction to the physical. Relatedly, physicalists (who reject the existence of inexplicable and irreducible subjective properties) may propose something similar at the level of concepts. Such physicalists hypothesize that we possess certain concepts – phenomenal concepts – that systematically fail to accord with the concepts deployed in objective physical science.^18 There is, of course, considerable disagreement about the precise nature of phenomenal concepts, and hence about the precise way in which phenomenal concepts fail to accord with physical concepts. Some theorists maintain that phenomenal concepts are recognitional concepts (Loar 1990/1997; Tye 2003); others maintain that they are quotational concepts (Block 2006; Papineau 2006); still others maintain that they are
(^16) Many have thought that consciousness is the feature left out of reductive accounts of
belief (e.g. Kriegel 2003, Searle 1991). This is, of course, consistent with the AGENCY model since that model proposes that identifying an entity as an AGENT will incline us to attribute both beliefs and conscious states. (^17) In his (1959), J.J.C. Smart attributes this objection to Max Black. Ned Block explicates
and responds to this objection in his (2006). (^18) On many accounts of phenomenal concepts, the failure is supposed to be that no
conclusion conceived under exclusively phenomenal concepts can be inferred a priori from any set of premises conceived under exclusively non-phenomenal concepts. The precise nature of the failure (for example, the reason why the relevant a priori inferences are supposed to fail) will depend upon the precise nature of phenomenal concepts.
(despite its present popularity), the AGENCY model can still contribute to a psychological explanation of the intuitive force of the explanatory gap. Thus, the AGENCY model is consistent with the phenomenal concept strategy, and it might be developed as a version of the strategy. But the AGENCY model is not hostage to the strategy.
4.3. Objection: The proposal mislocates the gap, part 2: The first person perspective
A related objection is that the source of the gap involves a difference between self-attributions and other-attributions of consciousness. The idea is that I appreciate the qualitative aspect of my pain in my own case, and no scientific description can provide a satisfying explanation of my pain experience. So, the problem gets off the ground because of something about self-attributions specifically. Since our proposal focuses primarily on other-attributions, it completely misses the problem of the explanatory gap.
Of course we agree that the explanatory gap can be made salient from the first-person perspective by focusing on one’s own experiences. However, it would be somewhat myopic to think that the gap essentially involves first-person (or self-attributive) cases. For an explanatory gap presents itself even when we restrict our focus to third-person attributions (i.e. other-attributions). People find it quite intuitive to attribute consciousness to many third parties, including dogs, birds, and ants. Setting aside philosophers in their skeptical moods, people rarely look at horses, cats, or humans and think “How could that thing be conscious?” On the contrary, it is virtually automatic that we judge such organisms to have conscious states. However, just as when we reflect on our own conscious states, a “gappy” intuition surfaces when we turn to specific kinds of third-person characterizations of consciousness, namely scientific descriptions. People are happy to credit consciousness to cats, but it is counterintuitive that cat-consciousness is ultimately nothing more than populations of neurons firing synchronously at 40-60Hz. That is where our proposal enters the picture. We claim that the gap arises in part because such scientific descriptions do not trigger the low road to consciousness attribution.
Of course, we find a parallel situation when we focus solely on self-attributions of consciousness. When I compare my own conscious experience with scientific descriptions of my own brain, the neural features do not seem to fully explain my conscious experience; and they certainly don’t seem to be my conscious experience. This intuition is generated (we suggest) because the neural description activates the high road but not the low road. By contrast, we don’t get a ‘gappy’ intuition when viewing our own image in a mirror. We don’t think, “Sure I’m conscious, but how can that thing in the mirror be conscious?” This, we submit, is because the mirror image does suffice to activate the low road to consciousness attribution. So the difference between self-attributions and other-attributions cannot by itself explain our ‘gappy’ intuitions about consciousness. Instead, the explanatory gap emerges at least in part from the contrast between cases in which there is intuitive support from the low road, and cases in which there is not intuitive support from the low road.
4.4. Objection: The Proposal is Overly General
We have argued that part of the explanation for the explanatory gap is that our gut-level feelings that an entity has conscious states are driven by a low-road process that is insensitive to the kinds of features that we find elaborated in neuro-functional descriptions of the brain. If that’s right, then we should expect to find something similar to the explanatory gap in other domains, because dual-process architecture is supposed to be implicated in many domains. But, the objection goes, these expectations go unsatisfied because the explanatory gap phenomenon is restricted to the domain of conscious experience.
One response to this objection is that for all we’ve said here, consciousness might be the only philosophically important domain in which an explanatory gap obtains. It’s certainly possible that the cognitive systems underlying other philosophically important domains do not employ the kind of dual-process architecture that we think drives explanatory gap intuitions. It’s also possible that such systems do have a dual-process architecture, yet never produce ‘gappy’ intuitions because dual-process architecture is not sufficient for generating an explanatory gap. After all, in some cases, the two systems might produce harmonious outputs, rather than the disharmony we find in certain attributions of consciousness. So even if our dual-process account is right for the explanatory gap for consciousness, it might turn out to be singular.
That said, we rather suspect that something like the explanatory gap phenomenon does show up in other cases where we try to reductively analyze intuitive notions. Take causation, for instance. There is good reason to think that we have a low- road process that generates gut-level intuitions about causation. Infancy research suggests that babies are especially sensitive to cues like contact (Leslie & Keeble 1987). Seeing a stationary object launch after being contacted by another object generates a powerful and intuitive sense of causation. Work on adults brings this out vividly. In a classic experiment, Schlottman & Shanks (1992) showed adult subjects computer- generated visual scenes with two brightly colored squares, A and B. The participants were told that the computer might be programmed so that movement would only occur if there was a color change; participants were told to figure out whether this pattern held. In the critical condition, every movement was indeed preceded immediately by a color change. On half of the scenes, there was no ‘contact’ between A and B, but B would change color and then move; on the other half of the scenes, there was contact between A and B just before B changed color and then moved. Importantly, the covariation evidence indicates that color change was necessary for motion. And indeed, the participants’ explicit judgments reflected an appreciation of this. But these explicit judgments had no discernable effect on their answers to the questions about perceived causation in launching events, viz., “does it really seem as if the first object caused the second one to move? Or does it look more as if the second object moved on its own, independent of the first object’s approach” (Schlottman & Shanks 1992, 335). Only when there was contact did people tend to say that it “really seemed” as if the first object caused the second object to move. This gut-level sense of causation seems to be driven by a low-road system that is insensitive to covariation information.
Our present proposal might play a significant role in filling out such an argument by offering a more detailed empirical account of the psychological mechanisms that drive our intuitive resistance to physicalism. To determine how much philosophical weight we should give to our intuitive resistance to physicalism, we would do well to know a good deal about the psychological basis for that resistance. Our proposal is that the resistance is caused partly by the fact that the low-road mechanism will not render a confirmatory gut-feeling to our considered reasons for thinking that conscious states are brain states. A further question is whether we should take that low-road system to carry any epistemic weight, and if so how much weight. Answering this question involves confronting difficult epistemic issues, and we won’t presume to do them justice here. But at a minimum, we think there is reason to take a skeptical stance toward the low road’s epistemic credentials.
One suggestion is that we should discount the low-road system simply because it is relatively crude and inflexible. By contrast, our reasoned judgments about consciousness are highly flexible and general, and might be thought to be more trustworthy than the low-road mechanism because they take more information into account.^21 This kind of consideration is clearly not decisive, however, because it’s plausible that we are often justified in trusting the outputs of relatively crude and inflexible cognitive systems (low-level vision, for example).
Another possibility is that this particular low-road mechanism is untrustworthy, even if there is little reason to doubt the outputs of low-road mechanisms in general. It is highly plausible that a low-road mechanism for detecting other minds (and other conscious minds) would be subject to a high rate of false positives. Considerations from signal-detection theory and evolutionary psychology support this claim. Consider, for example, the high cost of a false negative. Failing to detect another conscious agent could have potentially disastrous consequences: a rival human or (worse) a hungry predator could easily get the jump on the poor sap whose low-road mechanism outputs a false negative. Since an easy way of producing fewer false negatives is to produce more false positives, this is what we should expect the mechanism to do. And indeed, it seems plausible that the low-road mechanism does in fact produce many false positives. The Heider-Simmel illusion seems to provide an obvious case in which our intuitive attributions of mentality are misguided: animated cartoons and movies provide a range of similarly clear examples. In these kinds of cases, it is extremely plausible to think that the low-road mechanism has produced inaccurate outputs.
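The expected-cost reasoning behind this claim can be made vivid with a toy calculation; the base rate, hit rates, false-alarm rates, and costs below are made-up numbers chosen only to illustrate the asymmetry, not estimates drawn from any data.

def expected_cost(p_agent, hit_rate, false_alarm_rate, cost_miss, cost_false_alarm):
    # Expected cost of running an agent-detector with the given error profile.
    p_miss = p_agent * (1 - hit_rate)
    p_false_alarm = (1 - p_agent) * false_alarm_rate
    return p_miss * cost_miss + p_false_alarm * cost_false_alarm

P_AGENT = 0.2            # hypothetical base rate of genuine agents in the environment
COST_MISS = 100.0        # e.g. being ambushed by an undetected predator
COST_FALSE_ALARM = 1.0   # e.g. needlessly fleeing a rustling bush

conservative = expected_cost(P_AGENT, hit_rate=0.80, false_alarm_rate=0.05,
                             cost_miss=COST_MISS, cost_false_alarm=COST_FALSE_ALARM)
liberal = expected_cost(P_AGENT, hit_rate=0.99, false_alarm_rate=0.40,
                        cost_miss=COST_MISS, cost_false_alarm=COST_FALSE_ALARM)

print(conservative, liberal)  # the liberal, false-positive-prone criterion is cheaper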
But what about false negatives? False negatives are more directly relevant to the explanatory gap, because (we claim) the gap is a case in which the low-road mechanism is silent. It’s worth noting that even mechanisms with a high rate of false positives may sometimes output false negatives. For example, we might expect a “snake detector” mechanism to have a high rate of false positives, for reasons similar to those given above. But such a mechanism may occasionally fail to detect a snake: the snake might be camouflaged, or irregularly shaped, or seen from a non-standard vantage point. In such
(^21) Haidt (2001) and Greene (2003, 2008) reason along these lines to the conclusion that our reasoned moral judgments are more trustworthy than our intuitive moral judgments.
cases the snake-detector would remain silent. Could our proposed low-road mechanism for consciousness-attribution be similar to the snake-detector in this respect? It is difficult to say, because in the case of snakes we can appeal to an independent and relatively uncontroversial standard about which things count as snakes. But in the case of consciousness there is no such independent standard, since there are core philosophical and scientific disputes about the nature and scope of consciousness. So it seems doubtful whether this kind of consideration could yield a decisive reason for saying that the low-road mechanism is untrustworthy in the relevant cases.
Nonetheless, we think there is reason to handle the low-road mechanism’s outputs (or lack thereof) with extreme care. While the low road is routinely triggered by biological organisms, it is rarely or never triggered by the brains of those organisms; and according to some of our best theories, the brain is the part of the organism most crucially responsible for its mind. That is, we have reason to suspect that the low-road mechanism is insensitive to some of the features that are most important for mindedness.
One of the salient features of the low-road AGENCY mechanism is that it is responsive to organisms, and not to particular bits inside of organisms. There is an obvious explanation for this. The low road is, in some fashion, an adaptation to our environment. It might be a domain-specific mechanism that was shaped by evolutionary pressures. Or it might be a developmental adaptation that children achieve through countless interactions with their environment. We take no stand on that issue here. Regardless of which kind of adaptation it is, the AGENCY mechanism was shaped by the environment to which we (or our evolutionary ancestors) were exposed. As a result, it is unsurprising that the mechanism responds to organisms but not to suborganismic bits. We (and our ancestors) interacted most often with entire organisms, not neurons in a petri dish. Once we see the role of the environment in shaping the mechanism, this should lead us to suspect that the low-road mechanism is a relatively shallow and inflexible informant for a theory of consciousness. The mechanism is sensitive only to gross organismic features, but we need not suppose that this is because consciousness only attaches to gross organisms. Rather, the reason the low-road mechanism is sensitive to such a restricted set of features is that whole organisms are the parts of the environment that are responsible for shaping the mechanism. Suborganismic features like neuronal firing patterns never had a chance to shape the mechanism, because they are hidden away behind skin and bone. So even if these features are crucially important for consciousness, we should still expect our low-road mechanism to be insensitive to this fact. As a result, when considering explanations of consciousness, there is reason to doubt that we can assign much evidential weight to the fact that the low road isn’t activated by suborganismic features. The fact that the low road is silent cannot be taken as significant evidence that consciousness is something other than a suborganismic feature.
Even on the supposition that the proposed low-road mechanism is not to be trusted in the relevant cases, we do not claim to have provided a complete psychological or epistemological account of the explanatory gap. For example, more must be said about the psychology and epistemology of attributions of particular kinds of conscious states (e.g. reddish visual experience versus blueish visual experience). At least one