Pilling, Michael; Thomas, Sharon – Language and Speech, 2011
Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight-channel noise-vocoder, which shifted the spectral envelope of the speech signal to simulate the properties…
Descriptors: Feedback (Response), Cues, Assistive Technology, Training Methods
Scharrer, Lisa; Christmann, Ursula; Knoll, Monja – Language and Speech, 2011
Previous research has shown that in different languages ironic speech is acoustically modulated compared to literal speech, and these modulations are assumed to aid the listener in the comprehension process by acting as cues that mark utterances as ironic. The present study was conducted to identify paraverbal features of German "ironic…
Descriptors: Cues, Vowels, Figurative Language, Criticism
Dachkovsky, Svetlana; Sandler, Wendy – Language and Speech, 2009
While visual signals that accompany spoken language serve to augment the communicative message, the same visual ingredients form the substance of the linguistic system in sign languages. This article provides an analysis of visual signals that comprise part of the intonational system of a sign language. The system is conveyed mainly by particular…
Descriptors: Sign Language, Intonation, Suprasegmentals, Visual Stimuli
Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc – Language and Speech, 2010
In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…
Descriptors: Cues, Affective Behavior, Indo European Languages, Speech Communication
Tamaoka, Katsuo; Makioka, Shogo – Language and Speech, 2009
The present study investigated the existence of a Japanese mental syllabary and units stored therein for speech production. Experiment 1 compared naming latencies between high and low initial mora frequencies using CVCVCV nonwords, indicating that nonwords with a high initial mora frequency were named faster than those with a low frequency initial…
Descriptors: Speech, Item Analysis, Word Frequency, Japanese
Ota, Mitsuhiko; Hartsuiker, Robert J.; Haywood, Sarah L. – Language and Speech, 2010
A visual semantic categorization task in English was performed by native English speakers (Experiment 1) and late bilinguals whose first language was Japanese (Experiment 2) or Spanish (Experiment 3). In the critical conditions, the target word was a homophone of a correct category exemplar (e.g., A BODY OF WATER-SEE; cf. SEA) or a word that…
Descriptors: Phonology, Semantics, Word Recognition, English (Second Language)
Rilliard, Albert; Shochi, Takaaki; Martin, Jean-Claude; Erickson, Donna; Auberge, Veronique – Language and Speech, 2009
Whereas several studies have explored the expression of emotions, little is known about how the visual and audio channels are combined during production of what we call the more controlled social affects, for example, "attitudinal" expressions. This article presents a perception study of the audiovisual expression of 12 Japanese and 6 French…
Descriptors: Foreign Countries, Affective Behavior, Emotional Development, Emotional Response
Arvaniti, Amalia; Ladd, D. Robert; Mennen, Ineke – Language and Speech, 2006
This paper compares the production and perception of the rise-fall contour of contrastive statements and the final rise-fall part of polar questions in Greek. The results show that these superficially similar rise-falls exhibit fine phonetic differences in the alignment of tonal targets with the segmental string, and that these differences can be…
Descriptors: Young Adults, Phonetics, Auditory Perception, Native Speakers
Brancazio, Lawrence; Best, Catherine T.; Fowler, Carol A. – Language and Speech, 2006
We report four experiments designed to determine whether visual information affects judgments of acoustically-specified nonspeech events as well as speech events (the "McGurk effect"). Previous findings have shown only weak McGurk effects for nonspeech stimuli, whereas strong effects are found for consonants. We used click sounds that…
Descriptors: African Languages, Vowels, English, Comparative Analysis