Saturday, December 17, 2005

The New Cognitive Sciences and Political Theory

I just recently finished reading Daniel Dennett's Consciousness Explained (1991). If you click on the link to the book on Amazon, you may find that some of the reviews are quite hostile (and some, in my opinion, miss the point entirely, but that's another story); but I found the book enjoyable and nicely written for a general audience (with the occasional very bad joke thrown in; there are some disturbing similarities between Dennett's style and Thomas Friedman's).


Dennett's purpose, as the title says quite explicitly, is to explain consciousness (or "explain it away," as some of his critics would argue), or rather, to provide a philosophical framework for such an explanation; and he is conscious (pun intended) that in the process he will likely arouse much hostility, from regular readers (as some of the Amazon reviews show) and philosophers alike. I came to it with a mildly hostile attitude myself ("mildly" because I enjoyed very much Dennett's Darwin's Dangerous Idea, so I thought this earlier work could be worth reading even as I was skeptical of its premises), since I had been persuaded by the types of arguments presented by people like John Searle (of "Chinese Room" fame) and Colin McGinn (and, from a different perspective, Hans Jonas), who argue or would argue that Dennett's project is on its face preposterous, since consciousness could not be explained by the kind of "third person" perspective that Dennett takes. But I came away from it, if not convinced, at least far more open to the kind of argument that purports to explain consciousness as a specific effect of the self-organization of human brains attained via evolution, and less convinced of the cogency of Searle's and McGinn's arguments. (It is an interesting fact in itself that this debate over the "explicability" of consciousness has become so polarized - perhaps a sign of a hidden antinomy of reason, in Kantian terms? But that's another story. So many stories, so little time.)

Dennett draws on a wide variety of research in neuroscience, computer science, and other disciplines to make his argument, though this is perhaps not the book to read if you are interested in the "state of the art" in those sciences (it was published in 1991; for the state of the art in neuroscience, you probably have to read something like Christof Koch's The Quest for Consciousness, which I'd love to read but have not been able to). But the book's strength is as a philosophical argument - ultimately drawing on Darwin, Wittgenstein and Nietzsche, as Dennett makes explicit here and there - to shift the metaphors by which we think about "consciousness," or more generally, "the soul," a program which Dennett continues in his later work, Freedom Evolves (which I am currently reading).

Dennett is not always at his best on the political implications of thinking of consciousness from this new perspective - informed by what I would call, broadly speaking, the "new cognitive sciences" - though that there are such implications seems to me clear. I am less interested in the ultimate correctness of Dennett's theory - there are many competing models of consciousness that draw on these new sciences - than in what they mean for thinking about politics and political theory. I am less clear as to what these implications could be. (So here we come to the point, you say, and then you find out that there is no point).

Dennett, in some ways, is like a more cheerful, more empirically grounded Nietzsche: the new sciences are relentless in the destruction of god, the soul, and all such metaphysical fancies (there are holdouts, to be sure). He does not delight in destroying (unlike Nietzsche), and he does not shy away from erecting new "sacreds" - the tree of life at the end of Darwin's Dangerous Idea, for example; his good humor can be infectious, and there is a real delight in discovery in his work (I sure learned a lot of weird stuff about human consciousness that really jolted some preconceived ideas I had). But after you read him, it's hard to think (speaking as the unreconstructed Platonist I play at being in my work) of the "soul" in the same way as before. What point is there, for example, to the tripartite psychology of book IV of the Republic? One might say that there are political points, but what if that sort of phenomenology used by Socrates there is all wrong - deceives you, in fact? What can it mean to speak of "self-control" or "reason mastering the appetites" if our brains are the way these new sciences say they are, a kind of loosely structured "pandaemonium" shaped by natural selection? There are some answers to these questions - indeed, I could come up with some myself - but I wonder: should we, as political theorists interested in mostly historical approaches, pay any mind to the astounding revolution occurring around us? Do the new cognitive sciences have something to tell us about politics?

Anyway, just thinking out loud and wondering whether others share my perplexities. (A loooong thinking out loud, you might complain. But bloggy things are perhaps useful for this sort of stuff).

4 Comments:

At 11:50 PM, Blogger Xavier Marquez said...

I don't think Dennett is saying that history counts for nothing. In fact, he talks about the emergence of consciousness from the interplay of genes and memes - biological processes which give rise to human capacities which give rise to new artifacts, including language, which Dennett thinks essential for the more differentiated forms of consciousness (not all forms of consciousness, however; he does not think you can draw a very bright line between the definitely conscious and the definitely not conscious). In his metaphor: consciousness is like a "virtual machine" in the brain, affected by its own productions as well as by biological "hardware."

He might also say that he can account for the "space of reasons" - it's what his theory is supposed to do! - precisely by showing how a space of reasons emerges from spaces that are not of reasons (his style of argument tends to show you how the whole is freer/more intelligent/etc. than its parts: it's the denial of the thesis that causality is intelligible only across homogeneous realms).

More generally, however, I am wary of saying that one "can't imagine" certain things to be the case. I used to feel that what Dennett was trying to do was unimaginable; but his project is precisely as a philosopher (not a scientist) to make it imaginable, a task in which I think he succeeds beyond what I expected coming in. New metaphors and analogies are what make things imaginable, and Dennett is good at using them.

Wittgenstein says somewhere in the Philosophical Investigations that a problem that philosophers often have is that they have a limited store of examples (I paraphrase); thus, they can't imagine things that are in fact imaginable. More sharply, a failure of imagination is not always a good argument (though it may sometimes be, if the failure involves some logical contradiction, for example). Dennett does a very interesting job in giving a plausible account of human logos - without saying that human beings are utility maximizers or the like. (Language may have evolved for one reason, but the reasons for which one thing evolves are not always the reasons for which it is eventually put to use - organisms are jerry-built, their parts reused and adapted for purposes other than the things they were built for).

On the second comment. I think there are a few reactions to the new cognitive sciences:

1) Their findings are true, and they change everything: morality does not exist, free will is an illusion, etc., etc., etc. (Sort of a Spencerian reaction, or an Ayn Randian or even a vulgar Nietzschean reaction).

2) Their findings are true, but they don't change everything, only some things. (This is, I think, the Dennett argument: we are free beings with moral responsibility, though religion does not make much sense, we don't have substantial selves, and so on. I simplify).

3) Their findings are true, but they change nothing. We can still talk of god, the soul, freedom, etc., and conduct our moral/psychological phenomenology in the ways in which we've always conducted them. I have a sneaking suspicion that our very own Vittorio is in this camp, but I have not read his writings on the subject (there's a book he co-edited, Philosophy and Darwinism, and parts of his magnum opus, devoted to this theme. Perhaps the Hösleians out there can enlighten me on this?).

4) Their findings are true in some sense, but not wholly; or even if they were true, they are dangerous and should be combated. This is the view of the Catholic church, I think, for example.

So there are perhaps more than two sides of the debate. Which ones are antinomial, which ones reasonable, I don't know. I tend to think 2) and 4) make more sense than 1) and 3).

 
At 3:50 PM, Blogger Xavier Marquez said...

I don't think 1) and 2) are mutually exclusive. Let's take an example from a different realm: if my computer prints my document, I can "explain" that by saying that a certain number of transistors entered such and such states, current flowed through such and such wires, etc. But I can also simply say that I pressed the "print" button. Both explanations apply depending on the "level" I am interested in. (In fact, this is generally true of computers, which is why they are of such interest to philosophers of mind). Same for animals. You could presumably explain your cat's motion towards a plate of food by talking about neurons and the like, but you could also simply say that it was hungry, without implying that this is a smokescreen for the "real" explanation. Both explanations are real depending on the context.

 
At 3:54 PM, Blogger Xavier Marquez said...

I see your point more clearly now. 1) by itself is certainly not very helpful in moral evaluation (or in everyday explanation, either). The question is whether it changes anything about our moral evaluation generally.

Take your printer example. Why do we not say that the computer is evil? I would be tempted to say that we would call the computer evil if we had no idea of its functioning. It's (in part) because we know how computers work (more or less) that we are tempted to say that the computer is defective, not evil.

On the other hand, these evaluations can be a function of complexity. I recently read an article in the New Yorker about the best chess programs in the world. Not only can these programs beat any human grandmaster alive, they also exhibit different "styles" of play: people say they are more or less creative, aggressive, etc. These are not evaluations that were applied to chess programs only a decade ago, but now people can look at a game played by Hydra and say that it exhibits creativity and cunning. In a sense we "know" how these programs work (i.e., the programmer can look at the code, and we know how evaluation functions for chess generally work), but in another sense we don't - the functioning of these complex programs is sufficiently opaque that we resort to ideas such as aggressiveness, creativity, etc.

My point being that even though 1) is not directly relevant to moral evaluation, it might change the range of proper evaluation - if we knew that 1) and 2) were directly connected.

 
At 8:29 PM, Blogger Xavier Marquez said...

Hmm, Jeff, you are right that it is (at least somewhat) contradictory. (Though it is an old philosophical topos since at least Plato in the Laws that true understanding of human beings renders a strong notion of responsibility at least moot).

Let me try to save something from the comment, though.

There is a sense in which discoveries about how human beings function change the range of things for which we hold them accountable, or at least the way in which we hold them accountable. This does not mean that such discoveries inevitably tend to diminish human responsibility, though sometimes they do so. Thus, certain people (regarded before as "criminal") may be placed in the "not responsible" category based entirely on discoveries about the structure of sickness. (Dennett has a useful discussion of this in Freedom Evolves). But the opposite may happen too: as a result of improved understanding of human beings, certain people who were earlier deemed "non-responsible" individuals may be placed in the "responsible" category.

So let me go back to the printer example: when I say that we know how the printer "works" and thus we rightly place it into the non-responsible category, I mean that we know a printer is not a machine capable of making rational decisions, hence something we should place in the "non-responsible" category. But if we instead found, per impossibile, that the printer was in fact capable of making decisions - that it was capable of weighing my commands against some internally generated set of desires (let's leave aside how these are to be defined for the moment), for example - we might decide to place it into the "responsible" category. We use certain things as evidence of the responsibility or non-responsibility of the printer for the paper jams, and thus as evidence for the printer's effects being malfunctions or evil. This evidence, however, is always subject to error.

More generally, it may be that our judgments of what falls into the responsible and the non-responsible category depend in crucial ways on our understanding of how the thing (or the person) "works" to create effects in the world, and this is something that modern science can (potentially) illuminate. (It can also obfuscate matters, given enough conceptual unclarity to begin with.)

From another point of view, it may also be that we get to "evil" from a continuum that starts at malfunction - there may be no bright dividing line between pure malfunction and genuine responsibility, though if you go far enough in either direction you find strictly defined cases. (Think of cases of hard chemical addiction leading to crime; though the law needs to set relatively bright dividing lines, these aren't always non-arbitrary from a philosophical point of view).

Thus, while a printer's complexity may only be enough to qualify its paper jams as malfunctions, it is not so hard to imagine (perhaps I read too much SF at some point, though) a more complex machine being capable of "evil" actions, and indeed being held to account for them.

More importantly, however, we could argue that nothing that we can discover about human beings in general is going to change our image of them as responsible individuals in general, though some categories or groups of people might change positions in the responsible/non-responsible continuum precisely as a result of things we discover about them (and their capacities).

 

Post a Comment

<< Home