The sensorimotor interface

Introduction

From the superior temporal sulcus we move up to the beginning of the dorsal pathway, at the boundary of the temporal and parietal lobes within the Sylvian fissure, an area which Hickok & Poeppel refer to as Spt (Sylvian parietal-temporal):

_images/neuroling_Hickok07F1.png

Fig. 120 Hickok & Poeppel’s dual pathway model. [1]

The dorsal pathway maps phonological representations onto articulatory motor representations. Area Spt plays the pivotal role in this mapping, mediating between the sensory representations of the STS and the motor representations of frontal cortex; hence its label as a “sensorimotor interface”.

Properties of the Spt

With respect to the Spt in particular, Hickok & Poeppel note that subsequent research has revealed that it is activated by the perception and reproduction (by humming) of tonal sequences, and even by reading written words.

They also adduce the first five pieces of supporting evidence below; we add the sixth:

  1. the ability to acquire new vocabulary

  2. disruption by lesions: conduction aphasia

  3. the disruptive effects of delayed auditory feedback on speech production

  4. articulatory decline in late-onset deafness

  5. the basic neural mechanisms for phonological short-term memory

  6. the non-phonological residue of Wernicke’s aphasia: deficient self-monitoring

The ability to acquire new vocabulary

Hickok & Poeppel postulate that the ability to learn new vocabulary relies on initial guidance of the motor system by the auditory system. The auditory system supplies a representation which the motor system learns to articulate. This might involve feedforward processing, whereby the sensory code for a speech sequence is translated into a vocal motor sequence, or monitoring by feedback mechanisms, or both. In any event, there must be an auditory code which the motor system can ‘read’. The creation of such audio-motor codes is the task of area Spt.
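The feedforward/feedback distinction can be made concrete with a toy sketch (this is an illustration of the general idea, not a model proposed by Hickok & Poeppel; all names and mappings below are invented). A feedforward pass translates each auditory segment into a motor gesture via a stored audio-motor map; a feedback pass compares the produced sequence with the auditory target and repairs gaps, ‘learning’ a new pairing in the process:

```python
# Toy sketch (illustrative only): feedforward translation of an auditory
# code into motor gestures, plus feedback-based correction for segments
# that have no stored motor code yet.

def feedforward(auditory_target, motor_map):
    """Translate each auditory segment into a motor gesture via a learned map."""
    return [motor_map.get(seg, "?") for seg in auditory_target]

def feedback_correct(produced, auditory_target, motor_map):
    """Compare produced gestures with the auditory target and repair mismatches."""
    corrected = []
    for gesture, target in zip(produced, auditory_target):
        if gesture == "?":
            # No stored motor code: fall back on the sensory code and
            # store the new audio-motor pairing for next time.
            motor_map[target] = target
            corrected.append(target)
        else:
            corrected.append(gesture)
    return corrected

motor_map = {"b": "b", "o": "o", "y": "y"}        # familiar word: fully mapped
print(feedforward("boy", motor_map))               # ['b', 'o', 'y']
novel = feedforward("box", motor_map)              # ['b', 'o', '?'] -- unfamiliar segment
print(feedback_correct(novel, "box", motor_map))   # ['b', 'o', 'x']
print(feedforward("box", motor_map))               # ['b', 'o', 'x'] -- now automated
```

After one corrected attempt the word no longer needs sensory repair, which foreshadows the familiarity effect discussed next.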

This function of the Spt is just an extension of its role in the acquisition of language in children, so it does not really count as ‘new’ evidence. However, there is one subtlety of word learning that is not so easily foreseen from child acquisition: the possibility that unfamiliar, low-frequency or complex words might require more sensory guidance than familiar, high-frequency or simple words. The latter might actually become ‘automated’ as motor chunks that require little sensory input. In other words, as a word becomes familiar, motor planning might shift from reliance on sensory supervision to autonomy from it.

This hypothesis is consistent with a large motor learning literature showing shifts in the mechanisms of motor control as a function of learning. It is also consistent with a form of aphasia that Hickok & Poeppel attribute to a lesion of the Spt, namely conduction aphasia.

Conduction aphasia

Consider the following exchanges between a clinician and a patient:

Clinician: Now, I want you to say some words after me. Say ‘boy’.
Patient: Boy.
Clinician: Home.
Patient: Home.
Clinician: Seventy-nine.
Patient: Ninety-seven. No … sevinty-sine … siventy-nice….
Clinician: Let’s try another one. Say ‘refrigerator’.
Patient: Frigilator … no? how about … frerigilator … no frigaliterlater … aahh! It’s all mixed up!

(:cite:`Brookshire2003`, p. 158)

Clinician: Circus.
Patient: It’s a kriskus. … No, that’s not right, but it’s near. … Sirsis. … No. … This is very strange that I can’t say this word.… How about kirsis? … No. … I’ll have to by that. Kriskus? For some reason I can’t say it right now. But I’m close. Kirsis? No …

(:cite:`Brookshire2003`, p. 158)

Table 33 Conduction aphasia checklist

Symptom                                             | Conduction aphasia
----------------------------------------------------|--------------------------------------------
comprehension of spoken material                    | normal
segmental phonology                                 | normal
word selection                                      | normal
word semantics                                      | normal
fluency (production of speech)                      | normal
production of writing                               | normal
use of function words                               | normal
grammaticality                                      | normal
repetition of what others say                       | impaired for complex or low-frequency words
conversational proficiency, e.g. turn taking        | normal
concern about impairment                            | yes
concern about errors                                | yes
short-term retention & recall of verbal materials   | (no evidence)
other                                               | can’t write to dictation

:cite:`Rogalsky2015`

Lesion evidence for a sensorimotor dorsal stream. Damage to auditory-related regions in the left hemisphere often results in speech production deficits, demonstrating that sensory systems participate in motor speech. More specifically, damage to the left dorsal STG or the temporoparietal junction is associated with conduction aphasia, a syndrome that is characterized by good comprehension but frequent phonemic errors in speech production. Conduction aphasia has classically been considered to be a disconnection syndrome involving damage to the arcuate fasciculus. However, there is now good evidence that this syndrome results from cortical dysfunction. The production deficit is load-sensitive: errors are more likely on longer, lower-frequency words and verbatim repetition of strings of speech with little semantic constraint. Functionally, conduction aphasia has been characterized as a deficit in the ability to encode phonological information for production.

We have suggested that conduction aphasia represents a disruption of the auditory–motor interface system, particularly at the segment sequence level. Comprehension of speech is preserved because the lesion does not disrupt ventral stream pathways and/or because right-hemisphere speech systems can compensate for disruption of left-hemisphere speech perception systems. Phonological errors occur because sensory representations of speech are prevented from providing online guidance of speech sound sequencing; this effect is most pronounced for longer, lower-frequency or novel words, because these words rely on sensory involvement to a greater extent than shorter, higher-frequency words, as discussed above. Directly relevant to this claim, a recent functional imaging study showed that activity in the region that is often affected in conduction aphasia is modulated by word length in a covert naming task.

The effect of delayed auditory feedback on speech production

Delayed auditory feedback (DAF) records a person’s speech and plays it back through his or her headphones a fraction of a second later. It is used to treat stuttering, but it can also induce stuttering in non-stutterers (like me). There is no picture of this, but there are videos!
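Mechanically, DAF is nothing more than a delay line between microphone and headphones. A minimal sketch in plain Python (integers stand in for audio samples; the function name and values are illustrative, not from any DAF implementation):

```python
from collections import deque

def delayed_feedback(samples, delay_samples):
    """Return the input stream delayed by delay_samples, padding the
    onset with silence (0). At a 44100 Hz sampling rate, a typical DAF
    delay of ~200 ms corresponds to roughly 8820 samples."""
    buffer = deque([0] * delay_samples)  # delay line pre-filled with silence
    out = []
    for s in samples:
        buffer.append(s)                 # newest sample enters the line
        out.append(buffer.popleft())     # sample from delay_samples ago exits
    return out

speech = [1, 2, 3, 4, 5]
print(delayed_feedback(speech, 2))       # [0, 0, 1, 2, 3]
```

The speaker thus hears their own voice lagging behind their articulation, which is what disrupts the online feedback loop described above.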

Articulatory decline in late-onset deafness

Phonological working memory

:cite:`Murakami2015`

:cite:`Rogalsky2015`

Self-monitoring

:cite:`DeWitt2013`

Powerpoint and podcast

The next topic

The next topic is The posterior inferior frontal gyrus.

End notes


Last edited Aug 22, 2023