This would be to confuse the norm they follow with what gets by in their world. We could claim in a similar vein that people actually believe, say, all synonymous or. That is a lot to expect of one concept. If one wants to get away from norms and predict and explain the "actual. The concept of an intentional system explicated in these pages is made to bear a heavy load.
It has been used here to form a bridge connecting the intentional domain which includes our "common-sense" world of persons and actions. Michael Arbib and Keith Gunderson presented papers to an American Philosophical Association symposium on my earlier book.
This implies that what two things have in common when both are correctly attributed some mental feature need be no independently describable design feature. In the first section the ground rules for ascribing mental predicates to things are developed beyond the account given in Chapter 1.
I. Suppose two artificial intelligence teams set out to build face-recognizers. We will be able to judge the contraptions they come up with. Content and Consciousness. The second section threatens another familiar and compelling idea. I would rather have it considered an introduction to the offspring theory. In spite of a few references to Arbib's and Gunderson's papers and my book.
Our expectations of face-recognizers do not spring from induction over the observed behavior of large numbers of actual face-recognizers. There I claimed that since intentional stance predictions can be made in ignorance of a thing's design—solely on an assumption of the design's excellence—verifying such predictions does not help to confirm any particular psychological theory about the actual design of the thing.
But obviously there must be some similarity between the two face- recognizers. Not only will we want a face-recognizer to answer questions correctly about the faces before it. The logic of the concept of recognition dictates an open-ended and shifting class of appropriate further tasks. These conditions and criteria are characterized intentionally.
From the biological point of view. But what implications about further similarity can be drawn from the fact that their intentional characterizations are similar? Could they be similar only in their intentional characterizations? Consider how we can criticize and judge the models from different points of view. Or one might rely on a digital computer. At the physical level one might be electronic. They will often both be said to believe the same propositions about the faces presented to them.
For one thing. Since the Ideal Face-Recognizer. This much is implicit in the fact that the concept of recognition. The contraptions could differ this much in design and material while being equally good—and quite good—approximations of the ideal face-recognizer. The design of a face-recognizer would typically break down at the highest level into subsystems tagged with intentional labels: These intentionally labelled subsystems themselves have parts.
From the point of view of engineering. As guardians of the stock of common mentalistic concepts. From an "introspective" point of view. The relevance of these various grounds waxes and wanes with our purposes. Reply to Arbib and Gunderson 25 function or even chemistry to known elements in the brain. Let us see how this could work in more detail.
Since it seems we must grant that two face-recognizers. If we are attempting to model "the neural bases" of recognition. Now as "philosophers of mind". If we are engaged in "artificial intelligence" research as contrasted with "computer simulation of cognitive processes". This brings me to Arbib's first major criticism.
Arbib finds this "somewhat defeatist". Other states or parts may not suggest any intentional characterization—e. While no doubt some of these ascriptions will line up well with salient features of the system's design.
S and T's being in the same belief state need not amount to their being in the same logical state. When we are in a position to ascribe the single belief that p to a system.
Now we can see that what Arbib suggests is right. There need not. If we put intentional labels on parts of a computer program. I had said that in explaining the behavior of a dog. I wish to maintain physicalism—a motive that Gunderson finds congenial—but think identity theory is to be shunned. I am not thereby wrong. One might want to object: If one assigns the intentional label "the belief-that- p state" to a logical state of a computer.
Our imagined face-recognizers were presumably purely physical entities. Assignments of intentional labels. The sort of precision I was saying was impossible was a precision prior to labelling.
This idealizing of intentional discourse gives play to my tactic of ontological neutrality. The inescapably idealizing or normative cast to intentional discourse about an artificial system can be made honest by excellence of design. Here is one reason why. But if Arbib is right. The usual seductions of identification are two. I think we should be able to see that identity theory with regard to them is simply without appeal.
Journal of Philosophy. Intention 2nd ed. So if we are to have identity. For some ascriptions of belief there will be. In the first place there is no telling how many different intentional states to ascribe to the system. This should not worry us. The interest of the account is that it describes an order which is there whenever actions are done with intentions.
The latter motive has been all but abandoned by identity theorists in response to Putnam's objections and others. I think: If we then restrict ourselves for the moment to the "mental features" putatively referred to in these ascriptions. In baseball. Of course. At this point Gunderson. We should not suppose he is alluding to recurrent activities of blemish-ignoring. The parallel is not strong enough.
Grammar can be misleading. Not only is it not the case that oarsmen catch real live or dead crabs with their oars. The pursuit of identities. It is tempting to deny this. To see what they are getting at. In crew. In this instance the distinction would seem to yield the observation that so far only some program-receptive features of mentality have been spirited away. Suppose a programmer informs us that his face-recognizer "is designed to ignore blemishes" or "normally assumes that faces are symmetrical aside from hair styles".
So there is some plausibility in relegating these putative events. The former. As Gunderson says. Doesn't my very Rylean attempt fall down just where Ryle's own seems to: It is certainly clear that the intentional features so far considered have a less robust presence in our consciousness than the program-resistant variety.
I agree with Gunderson that it is a long way from ascribing belief to a system to ascribing pain to a person especially to myself. That is one reason why Gunderson is unsatisfied. He sees me handling the easy cases and thinks that I think they are the hard cases.
II. In Content and Consciousness I proposed to replace the ordinary word "aware" with a pair of technical terms. To build a self. I think they are the building blocks. Content is only half the battle. It is embarrassing to me that I have given Gunderson and others that impression. Let us suppose. Arbib points out that since a behavioral control system can tap sources of information or subprograms that find no actual exploitation in current behavior control but are only "potentially effective".
I think. What I want to establish is that these two notions wrongly coalesce in our intuitive grasp of what it is to be conscious of something. But what I will argue is that Arbib has gravitated to the emphasis on control at the expense of the emphasis on privileged access. So Arbib offers the following definition which in its tolerance for hand-waving is a match for my definitions—he and I are playing the same game: This half of the chapter might be read to more advantage with Part III.
As Nagel would put it. If we want to attract the attention of a dog so he will be aware of our commands. Many disagree with me. It is just this. They even may be seen to exhibit signs of self-consciousness in having some subsystems that are the objects of scrutiny and criticism of other.
First Arbib points out. If we are to capture the program-resistant features in an artificial system.
Coming to believe that p may be an event often or even typically accompanied by a rich phenomenology of feelings of relief at the termination of doubt.
Yet while they can be honored with some mental epithets. Somehow these systems are all outside and no inside. This will require giving the system something about which it is in a privileged position. Now I want to claim first that this incorrigibility.
They exhibit a form of subjectivity. On that point there is widespread agreement. This brings me to Arbib's criticisms of the notion of awareness₁ for they call for some clarifications and restatements on my part. And since even for creatures who are genuine selves. What I am saying is that belief does not have a phenomenology.
I suspect. We are. The content of one's reports does not exhaust the content of one's inner states. to play the role prescribed for them in system theory" p.
We are confused about consciousness because of an almost irresistible urge to overestimate the extent of our incorrigibility.
We must grant the existence of self-deception. The trick is not to confuse what we are. Once we see just how little we are incorrigible about. At best. I think it provides the way out of a great deal of traditional perplexity. Our incorrigibility is real. I agree. Arbib observes that "it follows from any reasonable theory of the evolution of language that certain types of report will be highly reliable.
Smith's infallibility has been purchased. Then being in state A will ipso facto involve being in state B. But suppose we break down state A into its component states.
Smith's verbal apparatus may execute that instruction spuriously. Following Arbib. Suppose Smith now says. Using Putnam's analogy. This does not leave Smith being infallible about very much.
Let us suppose that at some moment part of the content of Smith's state of awareness₁. But the trick. Since Smith is a well-evolved creature with verbal capacities. But does this amount to anything at all of interest? But we wouldn't want to do that in any case. It is just that the concepts of a report and of awareness₁ are so defined that Smith has an infallible capacity to report his states of awareness₁.
But if that is what it is for Smith to report. And we might be interested in that. Smith has not reported that he sees a man approaching if he makes a verbal slip.
This does not mean that we can give independent characterizations of some of Smith's utterances. But would we ever want to know what anyone's state of awareness₁. If we wanted to know whether what Smith said was what he meant to say.
particular instance. That would be miraculous.
When we want to know what state of awareness₁ Smith is in. Being asked. Or we might someday be able to take Smith's brain apart. Our coming to mean to say something is all the access we have. Smith enters the community of communicators. Our infallible. If we consider this group of persons and ask if there is some area of concern where Smith is the privileged authority. Smith can do better. I have said that the extent of our infallibility.
The picture I want to guard against is of our having some special. When we ask him what state of awareness₁. But Smith doesn't have to go through any of this. If one supposes that it is our thinking that actually controls our behavior. Gunderson can quite safely assume that his judgment is not a piece of self-deception. There is more than one verb that straddles the line as "suppose" does. currently controlling the rest of our activity and attitudes.
The two notions of thinking can each lay claim to being ordinary. It would be odd to suppose in the sense of "judge" that there are gophers in Minnesota without supposing in the sense of "believe" that there are gophers in Minnesota.
Consider any intentional sentence of the form "I ___ that there are gophers in Minnesota" where the blank is to be filled in by an intentional verb such as 'believe'. Gunderson's episode of meaning to himself that there are gophers in Minnesota is something to which his access is perfect but it is itself only a highly reliable indicator of what Gunderson believes. Lacking any remarkable emotional stake in the proposition "There are gophers in Minnesota".
Some philosophers have supposed otherwise. If one supposes on the other hand that one's thinking is one's "stream of consciousness".
Arbib champions one. When Arbib says of a verbal report that "the phrase is but a projection of the thought. I don't think it is wrong to think of thought in this way. I am denying. In just the same way someone would be mistaken who thought there was some physical thing that was all at once the voice I can strain. I even think that in the last analysis one is not thinking about thought unless one is thinking of something with both these features.
Thus utterances like 'I see a man approaching' express mere aspects of the robot's total state". The pain in my toe. The current thoughts. And I am insisting that thoughts and pains and other program-resistant features of mentality would have to have both these aspects to satisfy their traditional roles.
I am not denying that there are episodes whose content we are incorrigible about. Sam the reputable art critic extols. Presumably if a were true Sam would deny it to his grave. In Intention she seems to be arguing that the only information about a person that can be brought to bear in a determination of his beliefs or intentions is information about his past and future actions and experiences.
This is often plausible. Suppose Jack Ruby had tried to defend himself in court by claiming he didn't know or believe the gun was loaded. Sometimes one's biography seems completely compatible with two different ascriptions of belief. But in other cases the view is implausible. Is it in principle possible that brain scientists might one day know enough about the workings of our brains to be able to "crack the cerebral code" and read our minds?
Philosophers have often rather uncritically conceded that it is possible in principle. Two different hypotheses are advanced: Given even the little we know about his biography. I have been so far unable to concoct a proof that it is incoherent. Philosophy and Phenomenological Research. This reduces the claim that there is an inner language.
Couldn't the brain scientist in principle work out the details of Sam's belief mechanisms. Adequate psychological theories must reflect this knowledge and add to it. So adequate models must have states that correspond to beliefs. I admit to finding the brain-writing hypothesis tempting. Of course. Where there is such representation. Having deciphered the brain writing. I think many of our intuitions support the view that Sam really and objectively has one belief and not the other.
Gilbert Harman offers the first few steps of an answer: We know that people have beliefs and desires. Could the functional organization of the brain be so inscrutable from the point of view of the neurophysiologist or other physical scientist that no fixing of the representational role of any part were possible? Could the brain use a system that no outsider could detect?
In such a case what would it mean to say the brain used a system? I am not sure how one would go about giving direct answers to these questions. It is Harman's next point that strikes me as controversial: This second point raises some interesting questions. Assuming Harman's claim survives these questions. Two more steps are needed. Again Harman gives us the first step: In a simple model.
Are all representations bound up in systems? Is any system of representations like a language? Enough like a language to make this identification more useful than misleading?
Or is Harman's claim rather that whatever sorts of representations there may be. Brain Writing and Mind Reading 41 The first point.
Diehard peripheralist behaviorists may still wish to deny this. Wilfrid Sellars. Just so long as there is a finite number of different "languages" and "multi-lingual" functional elements to serve as interpreters.
Tokens and "strings" of tokens may of course align themselves in physical dimensions other than those of natural language. That is. This is a "practical" point. Representations of things believed would be stored in one place. Otherwise the language will be unlearnable.
Interaction with the environment would produce new representations that would be stored as beliefs. Inferences could produce changes in both the set of beliefs and the set of desires. That does not mean that all tokens of a type must be physically similar. Needs for food. Some formulations of it are forbidden us on pain of triviality.
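The simple model described here, with belief and desire representations in separate stores, perception adding beliefs, and inference updating the stores, can be sketched in a few lines. This is only a toy illustration: the names (`Agent`, `perceive`, `infer`) and the tuple encoding of conditionals are invented assumptions of mine, not anything drawn from Harman's text.

```python
# A toy version of the "simple model": beliefs and desires are stored as
# separate sets of sentence-like representations; perception adds beliefs,
# and inference derives new ones from what is already stored.

class Agent:
    def __init__(self):
        self.beliefs = set()   # representations of things believed
        self.desires = set()   # representations of things desired

    def perceive(self, sentence):
        """Interaction with the environment produces new stored beliefs."""
        self.beliefs.add(sentence)

    def infer(self):
        """A single toy inference rule: modus ponens over stored conditionals.

        Conditionals are encoded as ('if', antecedent, consequent) tuples."""
        derived = {
            belief[2]
            for belief in self.beliefs
            if isinstance(belief, tuple) and belief[0] == "if"
            and belief[1] in self.beliefs
        }
        self.beliefs |= derived


agent = Agent()
agent.perceive("it is raining")
agent.perceive(("if", "it is raining", "the ground is wet"))
agent.infer()
print("the ground is wet" in agent.beliefs)  # True
```

Even this caricature makes the structure of the hypothesis vivid: the stores hold discrete, individually identifiable tokens, which is exactly the feature the brain-writing conditions below put under pressure.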
In any case we already have enough to set some conditions on the brain-writing hypothesis. I think the following six conditions will serve to distinguish genuine brain-writing hypotheses from masqueraders. What physical feature is peculiar to spoken and written tokens of the word "cat"? There must simply be a finite number of physical sorts of token of each type.
There need not be a single generative grammar covering all representations. If too many unlikely beliefs or obvious untruths appear in the belief store.
Blocking this input and substituting random input produces no loss of flying rhythm. A person who discovered such a marvel would be roughly in the same evidential position as a clairvoyant who. It is worth mentioning only to distinguish it from more important obstacles to the hypothesis. If tokens turned out not to be physically salient—and this is rather plausible in the light of current research—the brain-writing hypothesis would fail for the relatively humdrum reason that brain writing was illegible.
Tokens might bear physical similarities. It must be demonstrated that the physical system in which the brain writing is accomplished is functionally connected in the appropriate ways to the causes of bodily action.
To give a more plausible example. Perkel and T. Neurosciences Research Symposium Summaries. The sentences yielded by our neurocryptographer's radical translation must match well with the subject's manifest beliefs and desires. Consider the two outcomes. Let us suppose he can do any rewiring. If the subject declines to assert or assent to these anomalous sentences.
Tom may say. Quine on radical translation. Tom is sitting in a bar and a friend asks. I don't have an older brother! I have an older brother living in Cleveland. If our translation manual yields sentences like "My brother is an only child" and pairs of sentences like "All dogs are vicious" and "My dog is sweet-tempered" one of several things must be wrong. This does not show that wiring in beliefs is impossible. We do permit some small level of inconsistency. Let us suppose we are going to insert in Tom the false belief: Whose name?
See Chapter 1.
This rewiring will either impair Tom's basic rationality or not. A million. For in addition to all the relatively difficult facts I have mastered. It has the capacity to extract axioms from the core when the situation demands it. Now how will. How much room do we need? Marvin Minsky has an optimistic answer: Perhaps it does this by storing the information that what is black is not brown. Now suppose we have a brain-writing theory that meets all of these conditions: Surely I can think of more than a thousand things I know or believe about salt.
To do this. Then there is my knowledge of arithmetic: This system is going to take up some room. Minsky's hundred thousand by the activity of an extrapolator-deducer mechanism attached to the core library. So let us attach such a mechanism to our model and see what it looks like. New York is not on the moon. The objection. My beliefs are apparently infinite. I therefore feel that a machine will quite critically need to acquire on the order of a hundred thousand elements of knowledge in order to behave with reasonable sensibility in ordinary situations.
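What attaching an extrapolator-deducer mechanism to a core library might look like can be sketched in miniature. Everything below (the color vocabulary, the single deduction rule, the function names) is an invented assumption for illustration; the point is only that indefinitely many beliefs ("salt is not brown", "salt is not blue", and so on) can be generated on demand instead of being written down.

```python
# A core library stores a few explicit facts; the extrapolator-deducer
# answers further queries by deduction, so the negative beliefs never
# need to be stored as tokens at all.

COLORS = {"white", "black", "brown", "blue", "green"}

core_library = {("salt", "white")}  # explicitly stored: salt is white

def believes_color(thing, color):
    """Answer color queries, deducing negative facts on demand."""
    if (thing, color) in core_library:
        return True
    # Extrapolator-deducer rule: a thing with a stored color
    # is thereby not any other color.
    stored = {c for t, c in core_library if t == thing}
    if stored and color in COLORS and color not in stored:
        return False  # deduced on demand, never stored
    return None  # no stored fact and nothing to deduce

print(believes_color("salt", "white"))  # True  (stored)
print(believes_color("salt", "brown"))  # False (deduced)
```

One stored fact here underwrites four deducible negative beliefs, which is the economy Minsky's estimate trades on, and also why counting a person's beliefs by counting stored tokens looks hopeless.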
But surely his figure is much too low. To give one more case. I would not want to claim. Yet it is nothing conscious that I do in order to perceive depth.
Could it do without brain writing altogether? I think we can get closer to an answer to this by further refining our basic model of belief. In its own core library of brain-writing sentences? If it has a core library. I have already mentioned the possibility of codes representing information about the tension of eye muscles and so forth.
Do I have a belief about how it used to be that grounds my current judgment that it has changed? If so. At another level there is the information we "use" to accomplish depth perception.
The brain must store at least some of its information in a manner not capturable by a brain-writing model. As one team sums it up: Representations apparently play roles at many different levels in the operation of the brain. His performance indicates that he has caught on to commutativity. This a priori point has been "empirically discovered" in the field by more than one frustrated model-builder.
Closer to home. Psychologists ascribe to us such activities as analyzing depth cues and arriving at conclusions about distance based on information we have about texture gradients. Suppose we partition our information store into the part that is verbally retrievable and the part that is not. Far from it. Included in this group will be the bits of factual knowledge we pick up by asking questions and reading books. I have deliberately not looked it up in the encyclopedia.
The picture that emerges is not. If any representations are stored in brain writing.
Sometimes we salt away a sentence because we like the sound of it. Whether they did or not. In Chekhov's Three Sisters. With regard to this group of representations Minsky's figure of a hundred thousand looks more realistic. Our preanalytical notion of belief would permit young children and dumb animals to have beliefs. Then Irina dreamily repeats it: If ever it seems that we are storing sentences. No doubt if someone offered me a thousand dollars if I could tell him where Balzac was married.
For instance, in the webpage cited above, which featured a green light followed by a red one, the green light seems to turn red as it appears to move across to where the red light is. As Dennett notes, this is quite odd. For one thing, how could the first light seem to change color before the second light is observed? Dennett entertains two options, both of which he discards. First, he considers the possibility that the observer makes one conclusion, and then changes her memory when she sees the second light.
In this scenario, shortly after the second spot goes into consciousness, the brain makes up a narrative about the intervening events, complete with the color change midway through. He then suggests a second alternative. More specifically, the first spot arrives in preconsciousness, and then, when the second spot arrives there, some intermediate material is created, and then, the entire, modified sequence is projected in the theater of consciousness. So the sequence which arrives at consciousness has already been edited with the illusory intermediate material (Dennett, p.).
Dennett then asks: What reason would we have for choosing one interpretation over the other? He contends that there is no way, even in principle, to select one interpretation over the other, for there is no way to demarcate the place or time in the brain in which material goes into consciousness (Dennett, pp.).
He then concludes that the putative fact that there is no way of distinguishing between the two interpretations lends plausibility to the Multiple Drafts Model. For according to the model, there is no concrete place or time in which material is, or is not, in consciousness. Another major source of concern has been whether there is really no difference, even in principle, between the two interpretations.
In his response to critics, he explains that the reason that the two interpretations cannot be distinguished is not because, in general, there is no such thing as phenomenal consciousness, but because such an extremely small timescale is involved. Certain sorts of questions one might think it appropriate to ask about them, however, have no answers because these questions presuppose inappropriate.
For according to one version, even at such a small timescale, there would be conscious experience; the conscious events would simply not be remembered. In the other scenario, the conscious experience would not have occurred at all. Indeed, even if the subject herself could not report a difference because, for instance, she could not remember the experience, it seems there would be, at the very least, an in principle way to tell the difference (Korb; Seager). For if one sequence is held up, before entering consciousness, and the other is simply recalled differently, there would be underlying brain states which differ; otherwise, differences in mental processing would fail to supervene on physical states.
No physicalist, including Dennett, would be prepared to accept this. In light of this, there should be, at least in principle, a measurable difference between the Orwellian and Stalinesque interpretations, and furthermore, such a difference may even fall in the realm of future, higher resolution, brain imaging techniques. It is only the claim that phenomenal consciousness itself does not exist, at least apart from probes, that would justify the strong conclusion that there is no difference between the two interpretations (Block). Leaving the phi illusion, let us now ask about the plausibility of the Multiple Drafts Model itself.
It has been more than a decade since the Multiple Drafts Model was developed, and there are features of the model which have clearly withstood the test of time. It is widely accepted that processing in the brain is massively parallel and that there is no centrally located homunculus that views all experiences passing before it. However, it is worth mentioning that the idea of massive parallelism was certainly not original to Dennett, and even back then very few scientists believed that consciousness all came together at one place in the brain.
But to fully judge the plausibility of the model, we might ask for the details of the model, because at this point in our discussion at least, we have not really laid out a model of consciousness, but an interesting contrast.
But to have a model of consciousness, there needs to be an answer to the question: What sort of program is the machine running? Dennett has expressed strong sympathy with the Pandemonium model of Oliver Selfridge, which was essentially an antecedent to connectionism. Pandemonium is a pattern recognition system that consists of four layers (see Figure). As Dennett surely knew, Pandemonium is far too simple to be a model of consciousness.
But what fascinated Dennett was the parallel nature of Pandemonium, in which there is no central executive. Indeed, explanation in cognitive science generally proceeds by the method of functional decomposition, a method which, put simply, explains a cognitive capacity by decomposing it into constituent parts, and specifying the causal relationships between the parts, as well as decomposing each part into further constituents, and so on (Cummins). So, trying to further explain the Multiple Drafts Model, it appears that, in addition to appealing to massive parallelism, the model involves a kind of computational functionalism without a homunculus.
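A Pandemonium-style competition can be sketched in miniature. The letters, features, and shouting rule below are invented for illustration and are drastically simpler than Selfridge's four-layer system; the point is only that recognition falls out of many demons shouting and one decision demon listening, with no central executive inspecting the process.

```python
# Cognitive "demons", each keyed to the features it cares about.
# These letters and feature names are invented for this sketch.
COGNITIVE_DEMONS = {
    "A": {"horizontal_bar", "oblique_lines"},
    "H": {"horizontal_bar", "vertical_lines"},
    "L": {"vertical_lines", "bottom_bar"},
}

def recognize(image_features):
    """Each cognitive demon shouts in proportion to its matched features;
    the decision demon simply picks the loudest shout."""
    shouts = {
        letter: len(wanted & image_features)
        for letter, wanted in COGNITIVE_DEMONS.items()
    }
    return max(shouts, key=shouts.get)

print(recognize({"horizontal_bar", "oblique_lines"}))  # A
```

Nothing in the sketch surveys the whole computation; the "answer" is just whichever demon happens to shout loudest, which is the feature of the architecture that attracted Dennett.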
Furthermore, we do not yet have a model of consciousness, for although there is an appeal to the method of functional decomposition, not even the most basic functional decomposition of consciousness has been offered. While Dennett shied away from proposing a particular theory of consciousness in Consciousness Explained, he expressed sympathy with the Global Workspace (GW) theory of consciousness, and the closely related Global Neuronal Workspace theory of consciousness, and he has recently re-emphasized his alliance with this position (Dennett). According to the GW theory, the role of consciousness is to facilitate information exchange among multiple parallel specialized unconscious processes in the brain.
At any given moment, there are multiple parallel processes going on in the brain which receive the broadcast. When in the global workspace, the material is processed in a serial manner, but this is the result of the contributions of parallel processes which compete for access to the workspace.
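That dynamic, in which parallel specialists compete for a single workspace whose winning content is then broadcast back to all of them, can be sketched as follows. The process names and the numeric "activation" competition below are invented assumptions for illustration, not part of Baars's or Dennett's formulations.

```python
# A minimal global-workspace cycle: parallel specialist processes offer
# content, the most active content wins the workspace, and the winner is
# broadcast back to every process.

class Process:
    def __init__(self, name, activation):
        self.name = name
        self.activation = activation
        self.received = []           # broadcasts this process has heard

    def bid(self):
        """Offer content to the workspace with some activation strength."""
        return (self.activation, f"content from {self.name}")

processes = [Process("vision", 7), Process("audition", 3), Process("memory", 5)]

# Competition: only the most active content enters the workspace...
workspace = max(p.bid() for p in processes)[1]

# ...and the winner is then broadcast back to every parallel process.
for p in processes:
    p.received.append(workspace)

print(workspace)  # content from vision
```

The serial character of the workspace falls out naturally: however many processes bid in parallel, only one content at a time is broadcast.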
At least at first, there are commonalities between the GW theory and the Multiple Drafts Theory. So perhaps now we are equipped to return to the question of the plausibility of the Multiple Drafts Model.
Unfortunately, while the Global Workspace theory might provide the beginnings of an information-processing model of consciousness, there are significant points of tension between it and the Multiple Drafts Model. For one thing, the GW theory has been categorized as a kind of theater model (Blackmore, p.).
It appears not, for according to the GW view, there is a definite sense in which certain mental states are in consciousness, while others are not: states are conscious when they are in the global workspace (Baars and chapter ). First, the GW view does not seem to require a probe for a state to be broadcast into the workspace; what is conscious is not determined by what is probed. So the contents of consciousness will differ according to each theory.
Here, it is important to note that a central system is not identical to a CPU. Again, a central system is a subsystem of the brain that integrates material from different modalities; a CPU, on the other hand, is a command center that executes every, or nearly every, command in a computational system. As Stanislas Dehaene and Jean-Pierre Changeux explain: The model emphasizes the role of distributed neurons with long-distance connections, particularly dense in prefrontal, cingulate, and parietal regions, which are capable of interconnecting multiple specialized processors and can broadcast signals at the brain scale in a spontaneous and sudden manner.
Indeed, the appeal to a central system by advocates of the GW theory is not limited to the work of Dehaene and Changeux. However, given the points of tension, Dennett cannot incorporate GW detail into his theory. He has addressed questions about the nature of mind and consciousness, the possibility of freedom, and the significance of evolution to addressing questions across the cognitive, biological, and social sciences. Some of the contributions provide a fresh take on a Dennettian theme, and others extend his views in novel directions.
But each of them aims to be readable and approachable.