An interview by Shaun Gallagher,
to appear in
Journal of Consciousness Studies (2002)
1. Action and Consciousness of Action: The importance of not being conscious
SG: In your work you have used the concept of a motor schema as a way to talk about very specific aspects of motor control. Such schemas are the elements of higher-order representations of action. In the transition from the level of motor schema to the level of action representation, do we somehow move from something that is automatic to something that is intentional?
MJ: First, I should say a few words to define the concept of schema. This is a very old concept in neurophysiology. People in the second half of the nineteenth century, like Charlton Bastian, held that past actions were stored as ‘conceptions of movement’ in the sensorimotor cortex, ready to be used when the same actions were reinitiated. Later, the terms motor engrams and, finally, motor schemas were used. As you see, this concept comes close to what we would now consider motor representations. More recently, in the hands of Tim Shallice and Michael Arbib, the concept of motor schemas has evolved into more elementary structures which can be assembled to form representations for actions. In other words, the concept of schemas is a way of describing the lower levels of a motor representation: it is a way of breaking through the levels and going down to the most elementary one, perhaps at the level of small neuronal populations.
I'm presently interested in characterizing the motor representation in no more than two or three levels. Below the lowest of these levels there must still be other levels, perhaps with schemas, but this description goes beyond my present interest. The most elementary level I am investigating is that of automatic action, which allows people to perform actions with fast corrections and adjustments to a target. Although we can do that very well, we remain unaware of what is happening until the target is reached. Above this level, there is another one where people are able to report what they have done. They realize that there was some difficulty in achieving the task -- that they have tried to go left or right, that they had to make an effort, that they have tried to do their best. Finally, there is a level where people try to understand why that particular action was difficult. In experiencing the difficulty in completing the task, they may ask themselves whether it was a difficulty on their part, a difficulty from the machine, or a difficulty from someone else controlling their hand. These questions are closer to the issue of self-consciousness than to that of mere consciousness of the action.
There are a number of possibilities for studying these levels, including in pathological conditions. This is how we came across Chris Frith's theory of central monitoring of action (Frith, 1992), which states that a particular type of schizophrenic patient should be unable to consciously monitor their own actions. We now have a paper in press where we clearly demonstrate that such patients do have a functional action monitor: they know what they do and they know how to do it. They are able to resolve rather complex visuomotor conflicts that have been introduced into the task to make it more difficult. However, they appear to fail at the upper level, which makes them unable to understand where the perturbation comes from -- is it coming from their side, from the external world, or from the machine? Of course, this type of information can only be obtained by asking these patients the right questions; simply looking at their motor behavior is not sufficient.
Coming back to your question on the schemas, the examples above show that the levels of representation of the action, as I have outlined them here, cannot be broken down into schemas – or, at least, schemas become irrelevant in this context.
SG: Instead of schema, there is no other word that you would use? You would have to just look at each case and ask the best way to describe it?
MJ: Yes, and that's the difficulty.
SG: And it would depend on whether you are asking about agency as opposed to simply trying harder to correct action in an experimental situation.
MJ: Yes. We will discuss agency later. Let us first examine the automatic level, where the action seems to go smoothly by itself, and where there is very little consciousness of the action.
Fourneret and I designed an experiment where subjects had to reach a visual target by tracing a line on a graphic tablet. The display was arranged so that they could not see their hand; the only information they had was vision of the line and of the target shown on a computer screen. On some trials, a bias was introduced, such that, in order to get to the target, subjects had to trace a line deviated by the same amount as the bias. When they reached the target, the line they saw on the screen was thus very different from the line they had drawn with their invisible hand (Fourneret and Jeannerod, 1998). For biases up to ten degrees subjects performed very well: they reached the target and did not notice any problem -- they were unaware that the movements they had made with their hands were different from the lines they had seen. However, when the bias was increased beyond this value of 10 degrees, subjects suddenly became aware that the task was difficult and that they were making errors in reaching for the target; as a consequence they consciously tried to correct for these errors. They were able to describe their attempts: "It looks like I am going too far to the right, I have to move to the left," and so on.
This result contrasted very sharply with what we observed in a group of patients with frontal lobe lesions, whom we studied with the same apparatus. These patients had a typical frontal syndrome and were free of psychotic symptoms such as delusions. The main effect was that they kept using the automatic mode as the bias increased, and never became conscious that the task was becoming more and more difficult and that they were making errors. This behavior was maintained for biases up to 40 degrees (Slachevsky et al, 2001).
At this point, it becomes tempting to make suggestions as to the brain structures involved in these mechanisms. First, one can assume, as many people would agree, that the automatic level is under the control of the parietal cortex; second, the above experiment points to the prefrontal cortex as the level where action is monitored. The question remains as to which part of the cortex houses the mechanism for the third level, agency.
The philosopher Pierre Jacob and I are trying to write a book on the problem of “two visual systems” – one system, as you know, for object recognition and another for acting on these objects. The main contribution of the book should be to clarify what it means to act automatically, to have no real subjective experience of what we are doing. We may have an experience of what we have done afterwards, but we are usually unaware of what we do while we are actually performing the action -- taking an object, for example. In spite of being unaware of what we do, we make perfect adjustments of the fingers to the shape or the size of the object. Thus, the general idea is that part of our visual system drives our behavior automatically, with minimal participation of the other part of the system, which contributes to object recognition.
SG: So when I reach for a glass, I may have an awareness of the glass, but I am not conscious of my reach or my grasp or what my fingers are doing.
MJ: Yes, you are probably conscious of the fact that you want to drink, of the general purpose of the action. By contrast, you will become fully conscious of the action itself if your movement fails, or if the glass is empty. The question we raise is how the visual system can select the proper information (the shape of the glass) and transform it, without the subject's knowledge, into precisely adapted movements. The main purpose of this work is to try to understand what it means to produce actions directed at a visual goal without being aware of that goal.
SG: Such actions would depend on motor representations, which are not yet motor images.
MJ: It is difficult to speak of motor images at this stage, simply because an image, by definition, should be conscious. Yet, some people like Lawrence Parsons now tend to assume that there are two types of motor images. First, there are motor images that we create in our mind as conscious representations of ourselves acting. Those are the overt, conscious images. But we may also use implicit motor image strategies for producing actions. The argument for assuming that these strategies indeed rely on some sort of motor imagery is slightly indirect: it is based on the fact that the time it takes to mentally perform the action is a function of motor contingencies.
This point can be illustrated by an experimental example. Imagine that you are instructed to take a glass with marks on it where you are supposed to place your thumb and index finger. If the marks are placed in an appropriate position, the action is very easy, and the time to take the glass is short. If, on the contrary, the marks are placed in an odd position, such that you have to rotate your arm into an awkward posture to grasp the glass, the action time increases. In the second part of the experiment, the glass is also presented with marks at different orientations, but this time you don't take it. Instead, you are instructed to tell (by pressing different keys) whether the action of grasping the glass would be easy or difficult. The time it takes to give the response is a function of the orientation of the marks, in the same way as for the real action (Frak et al, 2000).
The interpretation we gave to this result is that an action has to be simulated before it can be performed. This simulation takes place at a level where the contingencies of the action, like the biomechanics of the arm, are represented. The simulation will take longer in the odd condition than in the easy condition, as if the arm were mentally “rotated” into the appropriate posture before the grasping movement is executed, or before the feasibility response is given. This rather complex process is entirely non-conscious.
In fact, our experiment extends the Parsons (1994) experiment, where subjects had to identify whether a hand shown to them at different angles of rotation was a right hand or a left hand. The time for the subject to give the response is a function of the angle of rotation of the target hand: this is because the subject mentally rotates his own hand before giving the response. In addition, the pattern of response times suggests that this mental rotation follows biomechanically compatible trajectories: obviously, we cannot rotate our hand in any direction without taking the risk of breaking our arm! Again, this process remains entirely non-conscious.
A strong argument in support of the hypothesis of mental simulation can be drawn from neuroimaging experiments: they show that the motor system of subjects is also activated when they think about a movement (a conscious process), or even when they attempt to determine the laterality of the hand they are shown.
SG: Some of your work suggests that in some way the hand is quicker than the eye in certain circumstances. More generally, consciousness lags behind the action. But that doesn't mean that consciousness slows down movement, or does it?
MJ: Well, if you had to wait to be conscious of what you were doing, you would perform your actions so slowly that you would be destroyed by the first enemy to come along. The idea is that the mechanism that generates fast and automatic actions is an adaptive one. Another example is the reaction to fearful stimuli: the body reacts, with an activation of the vegetative system and the preparation to flee, and it is only afterwards that you consciously realize what produced the emotion.
SG: Joseph LeDoux's work on fear and the amygdala.
MJ: Yes, this is something you find in LeDoux's work. The purpose of the emotions is to activate the neurovegetative system, to warn the brain of the danger. Whatever the decision -- to run away or to attack -- it is taken implicitly.
SG: Yes, a good example of this occurred when I was attending a seminar at Cornell with a friend of mine. My friend got out of the passenger side of the car and found himself jumping over the car. He only then realized that he had seen a snake.
MJ: That's a good example, because a snake is one of these things that we are attuned to fear. In our experiment with Castiello that you were mentioning (Castiello and Jeannerod, 1991; Castiello, Paulignan, and Jeannerod, 1991), the subject had to reach for an object as fast as possible. Sometimes, the object changed its location in space or its appearance (it became bigger or smaller) at the exact moment when the subject was beginning his reaching movement. The subject was instructed not only to reach for the object, but also to signal by a vocal utterance the time when he became aware that the object had changed. Even before we made the proper experiments for measuring this time to awareness, we had noticed that people were already grasping the object when they told us that they had seen the change. When we made the experiments, we realized that subjects could report that a change had occurred only after a long delay, long after the movement had begun to adapt to that change. Indeed, this is just what your friend did when he jumped over the car before becoming aware of the presence of the snake. This is what we do when we drive a car and avoid a sudden obstacle: we become aware of the obstacle only later. We were able to measure this difference in time, which is something on the order of 350 msec. If you are driving a car and you wait until you become aware of the danger before you brake, do the calculation of the distance covered during 350 msec @ 40 mph(1). There is a great advantage in not being aware while doing something.
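For the record, that calculation works out as follows (a quick sketch in plain Python; the numbers are simple unit conversion, not data from the experiment):

```python
# Distance a car covers during the ~350 ms lag between a visual change
# and conscious awareness of it, travelling at 40 mph.
speed_mph = 40.0
speed_m_per_s = speed_mph * 1609.344 / 3600.0  # miles -> metres, hours -> seconds
lag_s = 0.350                                  # the measured time-to-awareness
distance_m = speed_m_per_s * lag_s
print(f"{distance_m:.1f} m")                   # about 6.3 metres before you notice
```

At 40 mph the car travels roughly six metres -- more than a car length -- before awareness catches up, which is exactly the advantage of the fast, non-conscious control loop.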
SG: You have done experiments with Jonathan Cole's patient, IW, [a patient who has lost his sense of touch and proprioception from the neck down and who must consciously think about his movements in order to control them -- see Cole, 1995; Gallagher and Cole, 1995]. IW indicates that he likes to drive. He says it is easier to drive than to walk. Yet his movements are not automatic actions; he has to be consciously aware of his movements. In driving, however, there is a good amount of visual proprioception -- Gibson's term for the part of visual information that specifies one's own movement through the environment. Could this visual proprioception help IW in various driving tasks?
MJ: That's one possible explanation, that he is a 'Gibsonian person', working with the optic flow and generating fast reactions on this basis. This explanation works when one moves toward an object, which is not always the case. Very often, we are stationary in the visual world and only move our hand to reach for a target, for example. In this case, there is no optic flow to guide the limb. The Gibsonian model hardly generalizes to all situations.
SG: What do you find wrong with the Gibsonian explanation?
MJ: What I think is wrong is the direct link from perception to action, without having a representation in between the two. I think that one first has to store the goal of the movement, and then use it to guide the movement. This is the representation. This concept is now developed by authors who stress the existence of internal models that are generated prior to an action. Obviously, this isn't the case in the Gibsonian theory.
SG: Gibsonian theory talks about sensory feedback and visual proprioception, and that is all directly transformed into motor control. So they don't have a place for the forward mechanism of motor control. Is that the problem?
MJ: That's right. That's what I don't like about the Gibsonian theory. Gibsonians have the idea of a perception-action cycle, where information coming into the visual system drives action, and action modifies perception. This cycle is described in the work of Ulrich Neisser, for instance.
SG: Would that be a good model to explain passive or involuntary movement, where there is no forward control?
MJ: Well, the good point in Gibson's theory is that it considers visual information as dynamic. Visual information is not only about our position relative to stationary objects. Visual information continuously changes because the visual environment moves: trees move in the wind, water flows, and so on. A second intuition Gibson had is that our own action in the environment makes the visual environment change. These are nice intuitions, and we certainly use this dynamic information from the environment. But it doesn't explain everything.
SG: Does consciousness, then, introduce something different into action? How is conscious movement different from non-conscious movement? The question I want to raise is whether this difference places qualifications on experiments when part of the paradigm is to call the subject's attention to what they are doing in regard to their movement.
MJ: In regard to this distinction between doing things automatically and doing things consciously, I will refer to experiments made by Mel Goodale and his colleagues. In the typical experiment, a target comes on, and you have to move your hand to that target. This is an automatic action. In a different set of trials, a target comes on, and you have to wait for five seconds before you move to it -- time enough for consciousness to influence the movement. What they found is that the kinematics of the movement and its accuracy are very different in the two conditions. In the conscious movement, the velocity is lower and the accuracy is poorer. This means that you have lost the on-line control of the movement, which works automatically; instead, you have used a system that is slower and is not adapted to producing automatic movements. The principle of the automatic movement is that it is based on a very short-lived representation. When you make a movement you have to keep track of the target to get to it, and then erase the representation and forget about it. You have to keep track of the target only for the duration of the movement, in order to make corrections as needed. We can also use other systems, where the functions are stored for a longer time: it is possible to make movements that way, according to Goodale, by using the ventral visual pathway, which indirectly reaches the motor centers through the prefrontal cortex. Normally, we don't use this pathway when we make movements. As I said before, both the automaticity of the movement and the lack of consciousness of the motor process are essential attributes of behavior, because if we didn't use the regular pathway for actions (the dorsal pathway), our movements would be too slow. They would be late, and they would be delayed with respect to their expected effect. In addition, there would be a lot of useless information in the system.
We may experience what it is like to move in the wrong way (using the wrong pathway) when we learn difficult actions. We first try to control every bit of the movement, until we learn to do it naturally and forget about how to move.
SG: The whole aim, and what we call proficiency in movement, is to move without being conscious of how one is moving.
MJ: Yes: to reach and grasp without paying attention to your hand, and so forth.
SG: When you speak of motor image, explicit motor images would mean that you are conscious of the movement -- consciously in charge of the movement. Would that imply a conscious access to the representation, or would it mean the creation of a motor image in a reflective manner?
MJ: What I mean by a representation is not conscious in itself. It is built on the basis of all sorts of information. But of course, once it is there, there is the possibility of accessing it consciously, although probably not all aspects of the representation are consciously accessible. I don't see how we could become conscious of everything involved in it. I had this discussion when I wrote my BBS paper on mental representations (Jeannerod, 1994). One of the criticisms expressed in the commentaries questioned why I had called motor images what is simply motor memory. The commentators said that we can do an action, and then think about it, or reenact it: this would be nothing more than memory, and there would be nothing special about it. At that time, in 1994, I was already thinking that there is a difference between motor image and motor memory, but I did not have many arguments to defend this position. The arguments came later, when, with Jean Decety, we made neuroimaging studies showing that during motor imagery you have the same pattern of activation as during action. We found activation in areas like the dorsal prefrontal cortex and the anterior cingulate areas. But the most conspicuous activation was in the motor system itself, the system which is needed to really produce movements. I was thus confirmed in the idea that motor imagination is more a covert action than something memorized in the classical sense. Motor imagery is just like action. When you are preparing to act or when you are about to start moving, you don't need to remember how you did it last time.
SG: There are cases, for example, in athletics, where trainers will advise the person to actually think through the action before doing it, and that tends to improve the action. On the other hand you have the idea that consciousness of the action interferes with the action. Is that a difference between anticipating what you will do, and being aware of it in the act?
MJ: People distinguish between being in the first-person situation and being in the third-person situation. Being in the third-person situation means that you are looking at yourself doing something, considering yourself as another person, from an external perspective. In the first-person situation, you are using a different vantage point, where you use your internal feelings …
These two situations are not equivalent: in one case you look at someone, and you are conscious of what you see them do. In the other case you are not looking at yourself, you are feeling yourself from within. The third-person consciousness is treating oneself as if from the perspective of another person, an observer.
2. Schemas and Networks
SG: You have shifted the kind of analysis you are doing away from motor schemas, but do you also want to give this notion up as a way of explaining representations? And if so, you would still want some way to describe the levels that are underneath representation. What would you substitute for the notion of schema in this regard?
MJ: I think that the best way to think about this is in terms of networks. In order to implement an intention, for example, a specific neural network will be assembled, and this network will appear as a network of activation if you use neuroimaging techniques. What has motivated the theoretical shift from schemas to networks is that you never see the schemas, but you do see the networks by using this technology. It is much more convenient to think of an assemblage of brain areas which compose the network for imagining an action, or for building an intention, and so forth, because you really see these networks and can then make sense of those ensembles. You can also elaborate further on possible overlaps between networks, on possible distinctions between networks, and so forth. It is still not the final answer, of course, because those networks, as we see them, are static. The brain image is a snapshot of the brain at a certain time, during a certain task. What I would now like to have is a dynamic view of networks. If I could get something like this, then I think that networks would be the best answer to your question.
It is possible that something like schemas exists in each area recruited by the network, but I think they are no longer useful. The schema concept was interesting because it offered the possibility of concatenating different elementary schemas into a broader schema. But now I think the network idea can do this as well.
SG: Moving in that direction would make the problem of the explanatory gap even more clear. Because with schemas you had something of the same order as representations, I suspect. Semantics or intentionality was already in the schema, whereas if we refer to neural networks, we are talking about patterns of neurons firing, and there is a bigger jump to the level of representation. Is this right?
MJ: Do you feel that by abandoning the level of schemas we lose something?
SG: Maybe not. On the one hand, perhaps we can think of schemas simply as one interpretation of neural networks -- a level of interpretation rather than a level of actual processes. On the other hand, maybe all the talk about schemas is simply covering up the problem. Maybe, by giving up schemas, we make what we are doing more clear, or make the problem more clear.
MJ: Yes, it may make it clear that we will never succeed in bridging the gap !
SG: So whereas schemas seemed to give us something to talk about between the level of neurons firing and the level of representation, if we do away with schemas, we get a much more complex level at the bottom, but it still doesn't add up to the representation. Unless you decide that the representation is nothing more than the neural network.
MJ: That's discouraging, but I think I agree. But you know that the problem of levels of explanation is still open. As a neuroscientist I will try to understand how the networks are built, or how they can implement a representation, and that's it. Of course I agree that someone else should take over the explanation of the cognitive stage. I do it from time to time.
SG: And why not.
MJ: The philosophers may be mad that I do it!
SG: But no one really knows how to solve that problem, so this is still a problem for everybody. And again, reference to schemas may lead us to think that we have an explanation where we really don't. So if you take away the schemas, and look, we see that the gap is still there, and it's not going to be filled up by schemas.
MJ: It is true that the recursive property of the schemas was a way of getting closer and closer to the neuronal level, starting from the higher level of explanation. That was the message of Arbib and the philosopher Mary Hesse in their Gifford lectures (Arbib and Hesse, 1986). In their book, they explain how schema theory can go from the bottom up to explain the conceptual level.
SG: That is still a model. One would use the vocabulary of schemas, or a different vocabulary, to formulate a cognitive model. In contrast, what you are talking about now, you can take a picture of it. The network is there in the brain, and your work is much more empirically informed.
SG: So it is something that is actually there. It's not a model so much as what is actually happening -- although, of course, one needs a model to explain what we are seeing in the brain.
MJ: Yes. It is actually happening and it makes sense out of all that we know from previous studies in neurology, monkey studies, and so on, about distribution of functions in the brain. And connections between different areas. Now is there anything else to know once you have seen the network?
SG: Yes, then you have to work your way up and your way down. What is the nature of the representation? One way to explain the representation was by referring to the hierarchical integration of schemas on an integrated level. But now if you don't have the schemas, then what is the nature of the representation? Is representation just taking the place of the schemas?
MJ: No. Perhaps it takes over part of the role of the schemas, but not everything. Now we have to find another explanation for connecting the networks into cognitive states. The schemas provided that, but in a way that was not very realistic, neurophysiologically. Even though the schemas offered the possibility of decomposing everything down to the single-neuron level, and then back up to the highest level, that was still a little metaphorical. In contrast, with the networks, especially if we get to the point where we can see them dynamically and watch the circulation of information, this will certainly become a much better framework. That's my hope.
SG: Even if we cannot solve the hard problem, one still needs models to continue the work and to explain what one is doing. So whatever model works best to explain the data that you are generating in your experiments, that would be the state of the art.
MJ: There is an attempt, mostly by neurophysiologists, to understand the dynamics of the networks. This includes all the work (with electromagnetic recording techniques) showing the synchronization of different brain areas. Here in Lyon there is a group working on olfactory memory. They have shown that when a rat is presented with an odor that it can recognize, a synchronization appears between the olfactory bulb and the upper level of the olfactory cortex. This does not happen if the rat is presented with an odor that it has not experienced before. This is one of the first demonstrations that this synchronization between areas becomes effective when there is something more than simple sensory stimulation. In this case, there is a sensory stimulation, plus storage, recognition, and memory of information. So that might explain how the different areas that compose the network communicate with each other. That will not change the fundamental problem, however, since this is still the network level.
SG: Of course, at some point you have to name certain aspects of these neural networks, patterns or connections that constitute a restricted network. So why not use the term 'schema' to name some particular pattern that seems to correspond to a behavior? That would not be the same concept of schema that you were using before.
MJ: No. And there is the problem of the temporal structure of nervous activity. There are no real attempts to conceive the temporal structure in the schema. The schema is a static thing, ready to be used. You take one schema, and then another, and another, and that adds up to an assembly or a larger schema.
SG: So what is missing is a concept of the dynamic schema.
MJ: Right. If the schemas become dynamic, as the networks are beginning to be, then okay, why not come back to schema theory and try to relate schemas to the networks. Anyway, with the networks we first have to go underneath, to see how they come to have vectors, how they change -- to understand the mechanics of the system. Of course, we also have to go higher up, to understand how this network has been constructed at a certain time, and for a certain task.
SG: And also to see how one can have an effect coming from the higher level down, how an intention might activate a network. Is it a two-way causality?
MJ: Yes, it is a two-way causality. And that's as far as we can go right now. We have this new concept, we have the tools to see the networks, to try to understand them. Also we have new paradigms, and this is an important point. The networks, as they can be seen through brain imaging, are associated with cognitive paradigms which are drawn from psychology, neuropsychology, and so on. And these are much richer than the sensory-motor paradigms that we used during the years of classical neuroscience. Now I think, having the new tools, having the new paradigms, having the new concept, this is a good time for moving forward.
SG: So that is the contribution of the cognitive sciences, insofar as they work together.
MJ: Yes, exactly. It used to be hard to bring all the forces to bear on one single problem. Now we can call together the psychologists, the people with the imaging tools, those working in clinical environments, and the model makers, and put them on one single problem: for example, why schizophrenics do not understand their own movements, and so on. Then we can actually get new ideas. I not only think that this is something that can be developed thanks to the concept of cognitive science, but I would suggest that it is only possible in places where all these people are in the same place at the same time. There have been, especially in the States I think, big efforts to build cognitive science institutes whose members were dispersed across different buildings, and sometimes different campuses. But in this institute in Lyon we try to have all of the specialists in one place, working together, making teams, and working on common projects. I think this is something that might work better and faster than the old system. This is my propaganda for this place.
3. The importance of intentionality
SG: Let me ask for a clarification. At one point in your book (Jeannerod, 1997) you said that the representation (or the neuronal activities that constitute the representation) of an action should not be influenced by the presence or absence of the actual target (p. 165). Can you explain why that is the case? If I am going to reach for something, isn't the presence of the target somehow included in the representation?
MJ: Not necessarily. The fact that you have a representation allows you to reach the target even if you don't see it. I think that a movement does not have to be permanently guided by the real target; a virtual target can do the job. This statement means that the action can go on even if the target disappears. This can be the case in simple actions like grasping. We have already seen the case of actions which are executed after the target disappears: in the case of delayed execution, the representation shows a working memory capacity, it can retain information. This is not in contradiction with the notion of short-lived representations in automatic movements.
SG: Do you not make adjustments to the representation when the target moves?
MJ: I can explicitly refer to some experiments with monkeys, where there is a waiting period between the moment when the monkey is shown a target and the moment when it executes the reach. During this period, the activation of single parietal neurons, which encode the direction of the movement, is maintained as long as the movement is not executed. This sustained activity is, to some extent, the representation of the movement. It will not go on forever, but it can still be maintained for several seconds.
SG: When the original representation is formed it depends on seeing an object. But then there is an absolute independence between representation and object?
MJ: Well, this is not exactly true, because you may form a representation with no immediate reference to a stimulus from the outside.
SG: You would have to imagine a target.
MJ: You form an intention to do something with an object, except that this object can be a purely mental thing. You form an intention with the belief that something will happen if you do a certain action. It's the goal.
SG: So when a representation is called upon for execution, the target doesn't have to be there. But in the original formulation of the representation, there must be some stimulus, although the stimulus may be an imaginary target, or it may be an intention to do something.
I think that your work shows the importance of the intentional level. Specifically, you show that the goal or the intention of my action will really determine the motor specifications of the action. You suggest that goal-directedness is a primary constituent of action (Jeannerod, in press). This means, I think, that the motor system works at the highest pragmatic level of description. In other words, the motor system is not simply a mechanism that organizes itself in terms of what muscles need to be moved, but it organizes itself around intentions. It designs the reaching and grasping differently if the intention is to take a drink from the glass rather than to pick it up and throw it at someone. And it is at that level of pragmatic intention that the system forms the representation. The representation is cast in terms of the intention or the goal. Moreover, this is something real, in the sense that it is not just that there are various levels of description that you could use to describe what is happening -- although there are indeed different levels of description. Rather, the motor system is actually keyed into the intentional level. Is this a good interpretation of what your work shows?
MJ: Yes. What I initially liked in Arbib's schema theory (e.g., Arbib, 1985) is that there was a representation or schema for every level, from the single finger movement up to the action level which embedded lower-level schemas, and so on and so forth. At the top you had the schema for the whole action, for example, getting something to drink. So, in order to drink you activated schemas to get to the kitchen; then you activated schemas to grasp the glass, to raise it to the mouth, and so on: for each sub-action you had other sub-sub-actions. That was the organizing idea of going from the higher level down to the lower one, a hierarchical organization.
What we want to have in a representation is not only the vocabulary to be assembled for producing the action (this is the static conception of the schema theory). Instead, we need the functional rules for assemblage, including the biomechanical constraints, the spatial reference frame, the initial positions, the forces to apply, etc. All these aspects form the covert part of the representation: they are present in the representation, as can be demonstrated in experiments with implicit motor images of the sort that were mentioned earlier (e.g., Frak et al, Parsons et al), but they cannot be accessed consciously. The conscious part of the representation doesn't really have to include all the technicalities of the action; it just specifies the goal. But, interestingly, even though you imagine the action in terms of its goal, in simulating it you also rehearse all the neuronal circuitry. As we said before, if you examine the brain activity during motor imagination, you will find activation of the motor cortex, the cerebellum, etc. Even though the subject is imagining a complex goal, you will observe activation in the executive areas of his brain, corresponding to motor functions which he cannot figure out in his conscious experience of the image.
SG: So the level of intention carries with it all the other levels, as if they were entrained by the intention. That would be why it may be quite easy for the motor image to capture all of that, to be framed in a general way, but to carry with it all of the motor details. This emphasis on intention ties into what you call the "paradigm-dependent response" (1997, p. 16), or what others might call the context-dependent response. The same stimulus might elicit different responses depending on the situation. The shaping of the grasp will depend on the intention, and simple actions are embedded in much more complex actions.
MJ: Yes. Complex goals or complex situations. This notion of context-dependent response could account for some of Melvin Goodale's findings on responses to optical illusions (see Aglioti, DeSouza, and Goodale, 1995). Take the Titchener illusion, for example. If you look at it, you will see one of the two disks at the center of the image as larger than the other, because it is surrounded by smaller circles. But if you are to grasp one of those disks, you will adjust the grasp to its real size. So, when you simply look at the disks, your estimate is influenced by the visual context, but when you grasp one of them you focus on the goal of the movement, and the context becomes irrelevant.
SG: The context that guides the movement is pragmatic.
MJ: In that case, yes.
SG: I think that you in fact say that in the case of apraxic patients, their movement improves in more contextualized situations. This is also something you find in Anthony Marcel's experiments where an apraxic patient will do much better at a particular movement if it is contextualized in some meaningful situation that they can understand. Or they do even better, Marcel says, in situations where they might have to do something with social significance. A movement that is mechanically equivalent is impossible for them in an experimental situation, but becomes possible for them in a social situation (Marcel, 1992).
MJ: Yes, this is a common finding in neuropsychology, following the classical observations by Hughlings Jackson. There are things that patients with cortical lesions find impossible to do if you simply ask them to do them: for example, pronounce a particular word. By contrast, this same word will be automatically uttered in the natural context of its use. This 'automatic/voluntary dissociation', as it is called, is also observed in apraxic patients. If you ask them to pantomime an action like combing their hair or brushing their teeth, they will scratch their head or do funny things. But if you give them a toothbrush they will be able to perform the correct action. The idea, put forward by Marcel, that these patients improve when the action becomes embedded in a context seems a good explanation of this phenomenon. An alternative explanation is that executing an action upon request, or pantomiming that action (things that an apraxic patient cannot do), implies a controlled execution, by contrast with using an object in a normal situation, which implies an automatic execution. This difference is a rather radical one, if one considers that the neural systems involved in initiating the action in these two situations might not be the same, with the implication that, in apraxic patients, the neural system for initiating a controlled action might be damaged.
SG: You make a distinction between the pragmatic and semantic representations for action (Jeannerod, 1994; 1997, p. 77), which is independent of the anatomical distinction between dorsal and ventral systems. The pragmatic representation refers to the rapid transformation between sensory input and motor commands. The semantic representation refers to the use of cognitive cues for generating actions. In the contextualized situation there is more meaning for the subject.
MJ: In a certain way, yes. But let's leave aside the distinction between semantic and pragmatic for a minute. Just take the dichotomy between the ventral and the dorsal visual systems. The dorsal route has been assigned a function in generating automatic execution. That would be the way by which an apraxic could correctly perform a movement that is embedded in a sequence and a broader whole. By contrast, when the patient is asked to purposively perform this movement detached from any context, he has to rely on something other than this automatic route. In my 1997 book I am a little reluctant to map the pragmatic and semantic distinction onto the dorsal and ventral systems, respectively. There are several examples where you may find signs of perceptual, conscious manipulation of information in the dorsal system. We demonstrated this in a PET experiment where subjects were instructed to compare the shape, size or orientation of visual stimuli, without any movement involved: we found a beautiful focus of activation in the posterior parietal cortex, in addition to the one we expected to find in the inferior temporal cortex (Faillenot et al, 1999). So, the processing of semantic information uses resources from both pathways; it is not located exclusively in the ventral system. I have reservations about these distinctions, but it is not a serious discrepancy between Goodale and me. It is just that I would like to keep the semantic representation for action on objects free from a rigid anatomical assignment. I am very reluctant to say that we have one part of our brain which works with consciousness, and that consciousness pertains to a specific area (the ventral system), while the dorsal part would work automatically without consciousness. My distinction between pragmatic and semantic modes of processing does not compete with the model of Goodale and Milner.
SG: While we are talking about these distinctions, there is an older distinction, and I wonder what you think of it. I mean Goldstein's (1940) distinction between abstract and concrete. Again it seems to involve contextualization. So a patient is unable to do something abstractly, for example, pick up this thing, or touch their nose, but they are able to do it in a very pragmatic situation. Does that in any way correspond to either ventral versus dorsal, or semantic versus pragmatic? Or do these distinctions cut across each other?
MJ: All these dichotomies, like the automatic versus controlled distinction that I mentioned earlier, refer to the same idea, although they probably don't overlap completely, as each has its own field of application. The distinction between concrete and abstract behavior includes some of the same things as pragmatic versus semantic representation, for example. But it also carries another meaning, which is the fact that a patient with a brain lesion will not be able to perform an action detached from its context. Goldstein makes the distinction in the context of pathology. We are back to Jackson's old automatic/voluntary dissociation.
SG: Yes, so does it exist normally?
MJ: Does it exist normally? Well, probably controlled versus automatic is a better way to describe things. In controlled action, you know what you are doing, you can make judgements on what you do. I agree, however, that it doesn't completely overlap with abstract versus concrete. Do you have an example of what it would be to be abstract in normal behavior?
SG: Well, for example, in an experimental situation if I am asked to make a certain movement that is without meaning, I would say that this is without pragmatic significance, it has no natural context for me to frame its meaning. It would be abstract. The distinction has to do with meaning or the lack of meaning.
MJ: Yes, in that sense I agree, as with the apraxic patients who cannot do something that you ask them to do outside of a purpose or context. But now I don't think that anyone uses these abstract-concrete terms any more. I don't see them used, certainly not by the French or the German neurologists.
SG: You are right, of course. Goldstein also distinguished between pointing and grasping, not only as two different actions, but as two different kinds of actions (1931). You indicate at one point in your text that pointing is a pure motor task. I wonder if it is a pure motor task, and I'm thinking of what we might call expressive movement in contrast to instrumental movement. In regard to pointing, one doesn't point unless someone else is around, and this is also a social action.
MJ: We had exactly this discussion with Pierre Jacob recently, about automatic versus controlled movements, dorsal versus ventral systems, and so on. Indeed, pointing to a target can be a purely automatic, target-oriented movement. But we know that it can also be something else, with an expressive meaning, in the context in which you show something to someone, for example. This is what people call deictic pointing: it is a pointing that indicates a direction, that shows someone where to go, or who is there. The other type of pointing movement has been used for fifty years by psychophysicists to study visuomotor transformation. For example, when we have to press a switch, or when we have to point on a touch screen during an experiment. One of the differences is between touching and not touching. If you point without touching, that is an expressive movement, whereas if you point to touch, that is an instrumental movement, and that is a pure visuomotor act.
SG: In the case of a pointing that ends up touching, which aspect has priority: am I pointing to indicate, or am I pointing, which is reaching, to touch?
MJ: Pointing to touch. There is a classic paper by Willem Levelt, who is known as a psycholinguist, who popularized the distinction between deictic pointing and visuomotor pointing (Levelt, Richardson and La Heij, 1985). He used exactly that distinction.
SG: Yes, I see now how you can say that pointing in that sense is a purely visual-motor task.
MJ: One more elaboration on this point. You may know the work in monkeys by neurophysiologists like Emilio Bizzi or, more recently by Apostolos Georgopoulos. They are studying what they call pointing in monkeys. In fact they train the monkey to use a lever and to move it in front of a visual target, but without touching the target. They call this pointing, which consists in adjusting the position of the hand to that of a visual target. This is another case of pointing as a purely visual-motor task.
SG: I want to go back to the semantic versus pragmatic distinction. You discuss an experiment that involves having people drive through gates of different widths. As the gates get narrower, the driver will slow down, and almost come to a stop, even when the driver knows that the car will fit. So the greater the accuracy required for driving through the gate, the slower the velocity.
MJ: In fact this experiment was done with Jean Decety, in a virtual setup (Decety and Jeannerod, 1996). Imagine that you are walking through a gate. We showed the subjects virtual gates of different apparent widths placed at different apparent distances in a virtual environment, so that they had no contact with reality. They were asked to mentally walk through the gates as fast as possible and to report the time at which they crossed them.
In fact, mentally walking through a narrow gate placed at a relatively greater distance took a longer time for the subject than walking through a wider gate. The times reported by the subjects followed Fitts' Law (2). This is an interesting result because Fitts' Law has been demonstrated in real movements performed in visual-motor situations, but in this case we showed that it was still present in mentally simulated actions. We realized that this is indeed true in real-life situations. When you drive your car into your garage, for example, you behave in the same way: you go more slowly when the gate is narrow, and you drive faster when the gate is much wider than the car. What we learned in this experiment is that this property is retained even in a mentally simulated action. Paul Fitts considered this effect as a purely visuomotor effect, which he related to the limited capacity of the visual channel. When the difficulty of the task increases (i.e., when the amount of information to be processed increases), he said, the movement has to be slowed down to preserve accuracy: to him, this was an automatic process. Now, we find it in a situation where the action is simulated, which means that the capacity of the visual channel and the difficulty of the task are also part of the central representation which guides the action. Gibson would never have thought of that.
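[Editors' illustration: Fitts' Law, in its classic form, predicts movement time as MT = a + b·log2(2D/W), where D is the distance to the target and W its width. The following minimal sketch shows why a narrower gate yields a longer (real or simulated) traversal time; the coefficients a and b are purely illustrative, not values fitted to the Decety and Jeannerod data.]

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Classic Fitts' Law: MT = a + b * log2(2D / W).

    a, b     -- empirical constants (seconds); illustrative values only
    distance -- distance D to the gate (same units as width)
    width    -- gate width W; narrower gates raise the index of difficulty
    """
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# A narrow gate at the same distance yields a longer predicted time
# than a wide one, as the subjects' reported mental-walking times did.
narrow = fitts_movement_time(a=0.2, b=0.1, distance=3.0, width=0.5)
wide = fitts_movement_time(a=0.2, b=0.1, distance=3.0, width=2.0)
```

With these illustrative coefficients, halving the gate width adds a fixed increment b to the predicted time, since the index of difficulty is logarithmic in D/W.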
4. Spatial frames of reference
SG: Reaching and grasping are two different kinds of movement. You show that reaching is dependent on an egocentric or body-centric spatial framework, whereas grasping tends to be more allocentric or object centered.
MJ: Well, I've changed my mind on this distinction. Initially, the idea was that grasping is independent of the position of the object, and that a grasping movement was performed in the same way whether the object was here or there. I believed that specifically because the Sakata group in Tokyo had recorded neurons in the monkey parietal cortex which encoded the shape of an object to be grasped by the animal. They clearly stated in one of their papers that the neuron activity, which was specific for a particular object and for the corresponding type of grip that the monkey made to grasp it, was independent of the position of the object in space. That seemed a good argument for saying that these types of distal movements (grasping) were under the control of an allocentric framework, i.e., unrelated to the relative positions of the object and the animal's body. Since then, with my colleagues here in Lyon, we have studied the action of grasping an object located at different positions in the work field. We found that, although the shape of the hand itself was constant for all positions, the shape of the whole arm changed from one position to another. We interpreted this result by saying that grasping is not a purely distal phenomenon; it also involves the proximal segments of the upper limb: the successful grasp of an object also involves an efficient posture of the arm. And when we looked at the invariant features of all these configurations of the arm across the different spatial positions of the object, we found that the opposition axis (i.e., the position of the two fingers on the object surface) kept a fixed orientation with respect to the body axis. This means to me that the final position of the fingers on the object is coded egocentrically.
In this experiment (Paulignan et al, 1997), we used an object (a cylinder) where the fingers could be placed at any position on its surface. Of course, things may become different if the object has a complex shape which affords fixed finger positions. In that case, there must be a tradeoff between the biomechanical requirements of the arm and the requirements introduced by the object shape. Just by looking at people in such situations, I have the feeling that we prefer to change our body position with respect to the object, rather than staying at the same place and using awkward arm postures. This is an indication that the organization of the distal part of the movement (the finger grip) is not independent from the organization of the proximal part of the movement (the reach).
SG: So all of these movements would be body centered, although the part of the body on which a particular movement is centered might be different from another movement. So there would be many different controlling points on the body framework.
MJ: The trouble with studies of visual-motor transformation is that they deal with the hand movement before the object is touched. These movements are organized in a body-centered framework. But visuo-motor transformation is a precondition for manipulation. Manipulation is no longer referred to the body: you can manipulate an object in your pocket, or you can manipulate it on the table, and the manipulation would be more or less the same. In that case the center of reference of the movements is the object itself.
SG: Doesn't intention enter into it again? The intention that I have, what I intend to do with the object, will define how I will actually grasp it.
MJ: The relation of intention to the frame of reference is an interesting point. What the shape of the fingers encodes during the visuo-motor phase of the movement is the geometrical properties of the object, like its size or its orientation. By contrast, what is coded when you recognize or describe an object are its pictorial attributes. The difference between the geometrical attributes which trigger finger shape, and the pictorial attributes which characterize the object independently of any action upon it, is the difference between the pragmatic and the semantic modes of operation of the system. In the pragmatic system, after all, you only need some crude coding of object shape. Accessing the meaning of an object requires much more than its geometrical attributes. This means that the content of the intention (or the representation) must be very different for pragmatic and for semantic processing.
SG: In your book, you did say that even if the pragmatic and the semantic are two distinct systems, they are tightly coordinated.
MJ: Yes, it is clearly demonstrated that connections, both anatomical and conceptual, exist between these two modes of functioning.
5. The sense of effort and free will
SG: I would like to explore another set of topics that concern what you call an "effort of the will." In your book you talk about the effort of the will as related to a sense of heaviness of the limb.
MJ: You are referring to a particular situation where the experiment involves modified conditions of the limb, such as partial paralysis or fatigue (e.g., McCloskey, Ebeling, and Goodwin, 1974). Imagine that you have one arm partially paralyzed or fatigued, and you are asked to raise a weight with that arm (the reference weight). Then, using the other, normal arm, you are asked to select a weight that matches the reference weight: the weight selected by the normal arm will be heavier than the reference weight. This means that, in selecting a weight, you refer not to the real reference weight, but to the effort that you have to put into lifting it. Because your arm is partially paralyzed or fatigued, you have to send an increased motor command to lift the reference weight, and you read this as an increased weight. You need more motor commands to recruit more muscle units, because they have less force.
SG: So the state of the muscles determines the phenomenology. Has the quantity of motor commands actually been measured?
MJ: Yes, it can be measured in these experiments, which I find extremely clever.
SG: In this regard I was puzzled because I thought that the sensation of the heaviness involved was due to peripheral feedback, whereas, if it is dependent on a quantity of motor commands, it is not really peripheral feedback, is it?
MJ: Right. When you lift something in the case of fatigue or partial paralysis, the illusion of an increased weight is due to increased motor commands.
SG: Does that mean you are activating more muscle?
MJ: More motor units. You are recruiting more muscle.
SG: So you would be using more muscle commands to accomplish the same thing that you could accomplish with less muscle when not fatigued or partially paralyzed.
MJ: In normal life, you calibrate the muscle command based on visual cues or cognitive cues -- you know that this particular object is heavy. Incidentally, there is an interesting example that was chosen by Freud to illustrate the idea of empathy. This is in his book on jokes (Freud, 1905). Freud tries to explain why we laugh when we see a clown, or someone who is pretending to make an enormous effort to lift an apparently heavy object and then falls on his back. We laugh because we have created within ourselves an expectation by simulating the effort of the clown, and we see something that is very different from that expectation. What we see is discrepant with our internal model, and this is the source of comedy. The simulation of the action we observe does not meet the expectation. I take this as a proof of the simulation theory.
SG: Was Freud a simulation theorist?
MJ: Yes, he was.
SG: A motor theory of comedy! The idea of a sense of effort and discharges in the motor system reminds one of Libet's experiments and how they tie into the question of free will. I think you cite his experiments.
MJ: Yes, I like them very much. If one looks in great detail, as Libet has done, at the timing of execution of a voluntary movement, the movement preparation begins 300 or 400 milliseconds prior to the consciousness that you have of it. This duration fits quite well with what we found in our experiment with Castiello (Castiello and Jeannerod, 1991; Castiello, Paulignan and Jeannerod, 1991), where subjects had to simultaneously reach for an object which suddenly changed its position, and to tell us when they noticed the change. The conscious awareness of the change lagged behind the motor correction by about 350 milliseconds. This means that one can initiate an action non-consciously and become aware of it later, as we illustrated earlier with your snake anecdote. Of course, what remains unsolved in these experiments is the theoretical issue: how can it be that the brain decides before me?
SG: Right. People say this has to do with free will. But consciousness comes back into it and qualifies what happens unconsciously.
MJ: And in fact this is what Libet tends to say.
SG: Let's go back to something we were talking about earlier, the idea that consciousness is slower than some movement; that some movement is so fast that our consciousness has to play catch up. The experiments with Castiello, for example, show that a subject's motor system will have already made proper adjustments to a target that unexpectedly moves, and these motor adjustments occur prior to the subject's awareness of the movement. In summarizing the results of these experiments you make the following statement (Jeannerod, 1997, pp. 86-87). "The fact that the delay of a visual stimulus remained invariant, whereas the time to the motor response was modulated as a function of the type of task (correcting for a spatial displacement or for a change in object size), reveals that awareness does not depend on a given particular neural system to appear. Instead it is an attribute related to particular behavioral strategies."
MJ: I was comparing two experiments. In one, the target is displaced at the time where you start moving to it. Your motor system makes a fast adjustment and you correctly grasp the target before being aware of the change. In the second experiment, the size, not the position, of the target is changed at movement onset (we had a system where an object could be suddenly made to appear larger). In this case, the shape of the finger grip has to be changed in time for making a correct grasp. Instead of seeing very fast corrections as we saw for the changes in object position, we found late corrections in grip size, the timing of which came close to the time of consciousness. This is because the timing of corrections for the grasp is much slower than for the reach. The important point is that, although the time to corrections may change according to the type of perturbation, the time to consciousness is invariant.
SG: So there is a delay for the subjective awareness of the change in visual stimulus, and that delay remains invariant across the two situations.
MJ: It remains invariant. Whether it is a change in position or a change in size, it will always take more or less the same time to become aware of it.
SG: Whereas the time to the motor response …
MJ: … which is either the time to the adjustment of the reach or to the change in grip size, will be different. The motor system will have to execute very different types of movements in the two situations.
SG: The time to that is modulated as a function of the task. That's fine. This reveals that "awareness does not depend on a given particular neural system to appear"?
MJ: Now I understand your puzzle.
SG: I think I read the emphasis to be on a particular neural system in order to appear, and you mean that it is consistent across both of those experimental cases. But then you conclude, "Instead it is an attribute of a particular behavioral strategy."
MJ: This is only partly true. Awareness can be shown to depend on the behavioral strategy when you are trying to isolate automatic actions from other aspects of your behavior, mostly in experimental situations. In everyday life, you have a constant flow of consciousness because automatic and controlled strategies, perception and action, etc., always go together.
6. Social Neuroscience
SG: Many theorists today make reference to mirror neurons in many different contexts, for example, in explanations of language development, neonate imitation, and how we understand others. Do you have any reservations about their wide theoretical use across all of these different contexts?
MJ: I am not working on monkeys myself. All that I know about mirror neurons is from Rizzolatti's work. Working in man, I think that this is a useful concept, but I don't see it limited to that particular group of neurons in the ventral premotor cortex. What you see in man is a large neural network which is activated during action observation. Thus, the idea that you get about action observation becomes very different from what you get by looking at the brain neuron by neuron. I told this to Rizzolatti and asked him why he didn't look for mirror neurons in other brain areas in the monkey. What I mean is that, after all, premotor neurons don't tell us the whole story.
SG: Gallese talks about a mirror system.
MJ: Maybe it is a consequence of my advice? (laughter). Good, because it is what you would see in man and what you would probably see in the monkey as well, if you studied the whole brain instead of just one point. Of course, we cannot forget that the concept of mirror neurons remains crucial. It indicates that in at least one point of the brain you have the same system which is key to the relation between observing and acting.
SG: And imagining movement?
MJ: Well in man you have the same areas which are activated during doing, imagining and observing. The question remains of the degree of overlap between activation of these areas in the three conditions. With the mirror neurons in premotor cortex, we know for sure that there is at least one point in the brain where the overlap is complete for acting and observing.
SG: I know that you and a number of people here in Lyon are interested in simulation theory, and as you mentioned, you have written about it (Jeannerod, 2001, in press, Jeannerod and Frak, 1999). Also, you, Decety, and various colleagues have done some work identifying brain areas that are activated not only when the subject performs an action, but also when the subject observes the action of another (Decety et al., 1997; Decety et al., 1994; Grezes and Decety, 2001). As your team here has discovered, the very same brain areas are activated when the subject simulates the observed action, that is, when the subject imagines himself doing the action that he has observed. So there is an overlap of functions, mapped onto the same brain areas. Does this neurological evidence support simulation theory?
MJ: Well, let me show you a diagram we are
working with (Figure 1). This represents a motor cognitive situation
with two people. We have agent A and agent B. The processes
diagramed are represented as happening in the brain. They are based
on the idea of an overlap between neural representations that you make
when you observe an action or when you think of an action. Let us take
an example: agent A generates a representation of a self-generated action,
a motor intention. If this comes to execution, this will become a
signal for agent B, such that agent B will form a representation of the
action that he sees, and will simulate it. Agent B will make an estimate
of the social consequences of the action he sees and will possibly
change his beliefs about agent A. And then you have a cycle where
these two representations interact with each other, and so do the two agents.
I am answering the question of "Who?", that is, I am attributing certain
actions to another person, and certain actions to myself. In fact
the two representations, within an individual subject, are close to each
other and partly overlap. Determining who is acting, myself or the other,
will be based on the non-overlapping part. In pathological conditions,
if the overlapping area becomes greater, as this might happen in schizophrenia
(on my interpretation), then you have no way of knowing who is generating the action.
This diagram represents the case where two people interact. Another question would be what happens when you imagine an action of your own and at the same time observe an action by somebody else. How does the diagram work in this situation?
SG: Sarah Blakemore, following her work with Christopher Frith, wants to use the forward model to explain how we can predict what the other person will do (see Blakemore and Decety, 2001). That is, we would take the representation of the observed action and run it through a simulation routine that includes a forward model of where this action is heading. The same mechanism that allows me to anticipate a certain outcome of my motor commands would also help me to anticipate where the other person's actions are going.
MJ: That is a good idea, and in fact the diagram I showed you represents a forward model. This is what the estimation or prediction of consequences represents.
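[For readers who want the idea in computational terms: the estimation-of-consequences step can be illustrated with a minimal sketch. This is my own toy construction, not the model in Figure 1; the linear dynamics and function names are assumptions for illustration only. A forward model predicts the outcome of a motor command, and the residual between prediction and observation carries the information used to attribute the action.]

```python
# Toy forward model, a sketch under assumed linear dynamics (not the
# interviewee's implementation). A forward model predicts the sensory
# consequence of a motor command; comparing that prediction with the
# observed outcome yields a residual error.

def forward_model(state, command):
    """Predict the next state from the current state and a motor command
    (assumed dynamics: the command simply displaces the state)."""
    return state + command

def prediction_error(state, command, observed):
    """Residual between the observed outcome and the forward model's
    prediction; a non-zero residual flags an external perturbation
    or another agent's contribution."""
    return observed - forward_model(state, command)

# Self-generated, unperturbed action: observation matches the prediction.
print(prediction_error(0.0, 1.0, observed=1.0))  # → 0.0
# Perturbed (or other-generated) outcome: a residual remains.
print(prediction_error(0.0, 1.0, observed=1.5))  # → 0.5
```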
SG: So by "estimation of social consequences" you mean a prediction of what the other person will do?
SG: In this model, is there some point at which the representation of action becomes a motor image, I mean an explicit or conscious event?
MJ: Well there is something that this model doesn't say. It is that probably you don't need to go through the executed action to get the social signal. You are able to disentangle intentions or states of mind, even from covert actions.
SG: So you can anticipate an action, or you can discern an intention in someone else.
MJ: Yes, I mean that agent B may look at agent A and try to understand what his intentions are, using some form of mind reading. I think it is not necessary to execute an action to generate a signal to the other person. The signal may very well go through subliminal events like direction of gaze, posture, etc., and it doesn't need to be transformed into an overt action to reach the other person.
SG: I entirely agree on this point. Let me add that, according to some simulation theorists, it is not necessary to be conscious of the simulation. The representations do not have to reach the point of overt or explicit consciousness. That leaves lots of room for misunderstanding.
MJ: Of course. In some cases it may be better to act out what we mean and avoid the misunderstanding.
SG: Let me say that before I came here I attended the British Society for Phenomenology meeting in Oxford. Some of the phenomenologists there were convinced that neuroscientists were only interested in explaining everything in terms of neurons, and that, on this view, consciousness itself is simply a product of neuronal processes, with no causal power of its own. But your work goes in the other direction in the sense that you show that a subject's intentions will in some degree shape or determine what neurons will fire. We discussed this earlier. But here you suggest further that it may be what other people do, and how we interpret their actions, that will determine how our neurons fire, and, of course, how we will act.
MJ: Yes. Here we are in the context of social neuroscience.

References
Aglioti, S., DeSouza, J. F. X. and Goodale, M. A. (1995), Size contrast illusions deceive the eye but not the hand. Current Biology, 5 (6), 679-85.
Arbib, M.A. (1985) Schemas for the temporal organization of behavior. Human Neurobiology, 4: 63-72.
Arbib, M.A., and M.B. Hesse, 1986, The Construction of Reality. Cambridge: Cambridge University Press
Blakemore, S-J. and Decety, J. 2001. From the perception of action to the understanding of Intention. Nature Reviews: Neuroscience 2: 561-67.
Castiello, U. and Jeannerod, M. (1991), 'Measuring time to awareness', Neuroreport, 2: 797-800.
Castiello, U., Paulignan, Y. and Jeannerod, M. (1991), 'Temporal dissociation of motor responses and subjective awareness: A study of normal subjects', Brain, 114: 2639-55.
Cole, J. D. 1995. Pride and a daily marathon. Cambridge, Massachusetts: MIT Press; originally London: Duckworth, 1991.
Decety, J. and Jeannerod, M. (1996), Fitts' law in mentally simulated movements. Behavioral Brain Research, 72: 127-36.
Decety, J. Perani, D., Jeannerod, M., Bettinardi, V., Tadary, B., Woods, R., Mazziotta, J. C. and Fazio, F. 1994. Mapping motor representations with PET. Nature, 371: 600-02.
Decety, J., Grezes, J., Costes, N., Perani, D., Jeannerod, M., Procyk, E., Grassi, F. and Fazio, F. 1997. Brain activity during observation of actions: Influence of action content and subject's strategy. Brain, 120: 1763-77.
Faillenot, I. Decety, J. & Jeannerod, M. (1999) Human brain activity related to the perception of spatial features of objects. Neuroimage, 10, 114-124.
Fourneret, P. and Jeannerod, M. (1998), 'Limited conscious monitoring of motor performance in normal subjects', Neuropsychologia, 36: 1133-44.
Frak, V.G., Paulignan, Y. & Jeannerod, M. (2001) Orientation of the opposition axis in mentally simulated grasping. Experimental Brain Research, 136, 120-127.
Freud, S. 1905. Jokes and their relation to the unconscious. Trans. J. Strachey. New York: Norton, 1960.
Frith, C. D. 1992. The neuropsychology of schizophrenia. NJ: Lawrence Erlbaum.
Gallagher, S and J. Cole. 1995. Body Schema and Body Image in a Deafferented Subject, Journal of Mind and Behavior 16: 369-390.
Georgieff, N. & Marc Jeannerod. 1998. Beyond consciousness of external events: A Who system for consciousness of action and self-consciousness. Consciousness and Cognition, 7, 465-77.
Goldstein, K. 1931. Zeigen und Greifen. Nervenarzt.
Goldstein, K. 1940. Human Nature in the Light of Psychopathology. Cambridge: Harvard University Press; reprinted by Shocken Books: 1963.
Goodale, M. A. and Milner, A. D. 1992. Separate visual pathways for perception and action. Trends in Neuroscience, 15: 20-25.
Grezes, J. and Decety, J. (2001), 'Functional Anatomy of Execution, Mental Simulation, Observation, and Verb Generation of Actions: A Meta-Analysis', Human Brain Mapping, 12, 1–19.
Haggard, P. and Eimer, M. (1999). On the relation between brain potentials and the awareness of voluntary movements Experimental Brain Research 126, 128-33.
Jeannerod, M. 2001. Neural simulation of action: A unifying mechanism for motor cognition. Neuroimage (in press).
Jeannerod, M. (2001) Simulation of action as a unifying concept for motor cognition. In: Cognitive neuroscience. Perspectives on the problem of intention and action, S.H. Johnson (Ed). Cambridge, MA: MIT Press.
Jeannerod, M. 1997. The Cognitive Neuroscience of Action. Oxford: Blackwell.
Jeannerod, M. 1994. The representing brain: Neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17: 187-245.
Jeannerod, M. and Frak, V. G. 1999. Mental simulation of action in human subjects. Current Opinion in Neurobiology, 9: 735-39.
Jeannerod, M. & Decety, J. (1990) The accuracy of visuomotor transformation. An investigation into the mechanisms of visual recognition of objects. In: Vision and action. The control of grasping. M. Goodale (Ed), Ablex, Norwood, pp. 33-48.
Levelt, WJM, Richardson, G., & La Heij, W. (1985), Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133-164.
Libet, B., Gleason, C. A., Wright, E. W. and Pearl, D. K. 1983. Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106: 623-42.
Marcel, Anthony J. 1992. The personal level in cognitive rehabilitation. In N. von Steinbüchel, E. Pöppel & D. von Cramon (Eds.), Neuropsychological Rehabilitation. Berlin: Springer.
McCloskey, D. I., Ebeling, P., and Goodwin, G. M. (1974), 'Estimation of weights and tensions and apparent involvement of a "sense of effort",' Experimental Neurology, 42:220-32.
Parsons, L. M. 1994. Temporal and kinematic properties of motor behavior reflected in mentally simulated action. Journal of Experimental Psychology: Human Perception and Performance, 20: 709-730.
Paulignan, Y., Frak, V.G., Toni, I. & Jeannerod, M. (1997) Influence of object position and size on human prehension movements. Experimental Brain Research, 114, 226-234.
Slachevsky, A., Pillon, B., Fourneret, P., Pradat-Diehl, P., Jeannerod, M. & Dubois, B. (2001) Preserved adjustment but impaired awareness in a sensory-motor conflict following prefrontal lesions. Journal of Cognitive Neuroscience, 13, 332-340.
1 I did the math. 40 mph is 211,200 feet per hour. That's 3,520 feet per minute, or 58.66 feet per second. That's 0.059 feet per millisecond, or 20.5 feet per 350 msec. So the disadvantage of consciously controlling your braking at 40 mph is that a safety margin of about 20 feet would be lost.
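[The footnote's arithmetic can be checked mechanically; a short script, with the units and the 350 msec figure taken from the footnote itself:]

```python
# Re-derive the footnote's braking-margin figures (units: feet, seconds).
mph = 40
feet_per_hour = mph * 5280              # 5,280 feet per mile
feet_per_second = feet_per_hour / 3600  # seconds per hour
lost_margin = feet_per_second * 0.350   # distance covered in 350 msec

print(round(feet_per_hour))   # → 211200
print(round(lost_margin, 1))  # → 20.5
```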
2 Fitts's Law states: the time to acquire a target is a function of the distance to and the size of the target.
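[Footnote 2's statement has a standard quantitative form, the Shannon formulation MT = a + b·log2(D/W + 1); in the sketch below the constants a and b are arbitrary illustrative values, not fitted data:]

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law (Shannon formulation): movement time grows with target
    distance and shrinks with target width; a and b are device-dependent
    constants (arbitrary values here)."""
    return a + b * math.log2(distance / width + 1)

# Farther and smaller targets take longer to acquire:
print(fitts_movement_time(32, 2) > fitts_movement_time(16, 2))  # → True
print(fitts_movement_time(16, 1) > fitts_movement_time(16, 2))  # → True
```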