ERIC Number: ED659789
Record Type: Non-Journal
Publication Date: 2024
Pages: 152
Abstractor: As Provided
ISBN: 979-8-3840-2813-0
ISSN: N/A
EISSN: N/A
Cross-Modal Interactions between Neural Systems Underlying Language and Speech during Reading, Spoken Language, and Visual Speech Processing
Lillian Chang
ProQuest LLC, Ph.D. Dissertation, Georgetown University
Language is a unique skill used by humans to communicate thoughts and feelings. For most individuals, language is understood through different sensory modalities--for instance, hearing auditory words relies on audition, while perceiving visual speech via lipreading and reading written text both utilize the visual system. Although these different ways to perceive language are often discussed as distinct systems in the brain separated by modality, there are considerable cross-modal interactions between language modalities (e.g., learning to read new words by reading aloud, or simultaneous audiovisual speech processing during face-to-face conversations). As a result, there must exist neural pathways that connect language systems separated by modality to support cross-modal processes. These integration pathways should exist even during uni-modal processing (i.e., during silent reading, hearing auditory words, and silent lipreading). To test hypotheses on cross-modal integration--for instance, how regional activation and networks differ for stimuli in different language forms (i.e., written, auditory, and visual forms)--studies need to investigate not only responses in specific regions-of-interest (ROIs) but also delineate stimulus-driven connections between regions. Furthermore, it is crucial to examine cross-modal interactions beyond traditional language regions like the posterior superior temporal cortex (pSTC) and the inferior frontal cortex (IFC), as integration may occur in other areas. Traditional functional magnetic resonance imaging (fMRI) studies are also limited by their use of group ROIs, which neither account for the substantial variability in the locations of language-related regions nor accurately represent individual selectivity. To address these methodological concerns, the current dissertation leveraged several datasets to examine cross-modal interactions during reading, spoken language, and visual speech processing.
Specifically, I used functional localizer scans to define language ROIs for each individual subject and examined cross-modal coupling. In Chapter 2, I investigated cross-modal coupling between reading and spoken language processing by analyzing the activation and connectivity between ROIs during written and auditory word processing. Then, in Chapter 3, I delineated how visual speech processing involves integration with pathways associated with spoken language processing. Altogether, the findings from my dissertation provide a better understanding of the cross-modal mechanisms underlying language processing in the brain. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: (800) 521-0600. Web page: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml.]
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://bibliotheek.ehb.be:2222/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: National Science Foundation (NSF), Division of Behavioral and Cognitive Sciences (BCS)
Authoring Institution: N/A
Grant or Contract Numbers: 1756313