Showing 1 to 15 of 19 results
Peer reviewed
Kadooka, Kellan; Franchak, John M. – Developmental Psychology, 2020
Visual attention in complex, dynamic scenes is attracted to locations that contain socially relevant features, such as faces, and to areas that are visually salient. Previous work suggests that there is a "global shift" over development such that observers increasingly attend to faces with age. However, no prior work has tested whether…
Descriptors: Infants, Child Development, Human Body, Visual Stimuli
Peer reviewed
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B. – Developmental Psychology, 2017
Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants "declines" over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested 2 opposing hypotheses about this observed…
Descriptors: Infants, Age Differences, Visual Perception, Hypothesis Testing
Peer reviewed
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…
Descriptors: American Sign Language, Eye Movements, Phonology, Visual Perception
Peer reviewed
Friend, Margaret; Pace, Amy E. – Journal of Cognition and Development, 2016
From early in development, segmenting events unfolding in the world in meaningful ways renders input more manageable and facilitates interpretation and prediction. Yet, little is known about how children process action structure in events composed of multiple coarse-grained actions. More importantly, little is known about the time course of action…
Descriptors: Toddlers, Adults, Motion, Cognitive Processes
Peer reviewed
Burns, Patrick; Russell, James; Russell, Charlotte – Journal of Cognition and Development, 2016
It is usually accepted that the binding of what, where, and when is a central component of young children's and animals' nonconceptual episodic abilities. We argue that additionally binding self-in-past (what-where-when-"who") adds a crucial conceptual requirement, and we ask when it becomes possible and what its cognitive correlates…
Descriptors: Young Children, Memory, Visual Stimuli, Video Technology
Peer reviewed
Addyman, Caspar; Mareschal, Denis – Child Development, 2013
Two experiments demonstrate that 5-month-olds are sensitive to local redundancy in visual-temporal sequences. In Experiment 1, 20 infants saw 2 separate sequences of looming colored shapes that possessed the same elements but contrasting transitional probabilities. One sequence was random whereas the other was based on bigrams. Without any prior…
Descriptors: Infants, Infant Behavior, Visual Perception, Visual Stimuli
Peer reviewed
Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl – Journal of Autism and Developmental Disorders, 2015
Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to…
Descriptors: Autism, Pervasive Developmental Disorders, Visual Perception, Developmental Delays
Peer reviewed
Bluell, Alexandra M.; Montgomery, Derek E. – Journal of Cognition and Development, 2014
The day-night paradigm, where children respond to a pair of pictures with opposite labels for a series of trials, is a widely used measure of interference control. Recent research has shown that a happy-sad variant of the day-night task was significantly more difficult than the standard day-night task. The present research examined whether the…
Descriptors: Pictorial Stimuli, Visual Stimuli, Visual Perception, Visual Discrimination
Peer reviewed
Ip, Horace H. S.; Lai, Candy Hoi-Yan; Wong, Simpson W. L.; Tsui, Jenny K. Y.; Li, Richard Chen; Lau, Kate Shuk-Ying; Chan, Dorothy F. Y. – Cogent Education, 2017
Previous research has illustrated the unique benefits of three-dimensional (3-D) Virtual Reality (VR) technology in Autism Spectrum Disorder (ASD) children. This study examined the use of 3-D VR technology as an assessment tool in ASD children, and further compared its use to two-dimensional (2-D) tasks. Additionally, we aimed to examine…
Descriptors: Autism, Pervasive Developmental Disorders, Simulated Environment, Educational Technology
Peer reviewed
Corbetta, Daniela; Guan, Yu; Williams, Joshua L. – Infancy, 2012
This paper presents two methods that we applied to our research to record infant gaze in the context of goal-oriented actions using different eye-tracking devices: head-mounted and remote eye-tracking. For each type of eye-tracking system, we discuss their advantages and disadvantages, describe the particular experimental setups we used to study…
Descriptors: Video Technology, Infants, Spatial Ability, Eye Movements
Peer reviewed
Soussignan, Robert; Courtial, Alexis; Canet, Pierre; Danon-Apter, Gisele; Nadel, Jacqueline – Developmental Science, 2011
No evidence had been provided so far of newborns' capacity to give a matching response to 2D stimuli. We report evidence from 18 newborns who were presented with three types of stimuli on a 2D screen. The stimuli were video-recorded displays of tongue protrusion shown by: (a) a human face, (b) a human tongue from a disembodied mouth, and (c) an…
Descriptors: Video Technology, Visual Perception, Visual Stimuli, Neonates
Peer reviewed
Balconi, Michela; Amenta, Simona; Ferrari, Chiara – Research in Autism Spectrum Disorders, 2012
ASD subjects are described as showing particular difficulty in decoding emotional patterns. This paper explored linguistic and conceptual skills in response to emotional stimuli presented as emotional faces, scripts (pictures) and interactive situations (videos). Participants with autism, Asperger syndrome and control participants were shown…
Descriptors: Video Technology, Scripts, Nonverbal Communication, Semantics
Li, Feng – ProQuest LLC, 2011
Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…
Descriptors: Program Effectiveness, Human Body, Visual Stimuli, Video Technology
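The pupil-corneal-reflection (pupil-CR) principle summarized in the Li (2011) abstract above can be illustrated with a minimal sketch: gaze is estimated from the vector between the detected pupil center and the first-surface corneal reflection (glint), mapped to screen coordinates through a calibration. The second-order polynomial mapping and all function names below are illustrative assumptions, not details taken from the dissertation; per-frame pupil and glint detection is assumed to have been done elsewhere.

    # Minimal sketch of pupil-CR gaze mapping (assumed, not from the source).
    import numpy as np

    def pupil_cr_vector(pupil_xy, glint_xy):
        """Difference between pupil center and corneal reflection, in pixels."""
        return np.asarray(pupil_xy, float) - np.asarray(glint_xy, float)

    def design_matrix(v):
        """Polynomial terms [1, x, y, xy, x^2, y^2] for one pupil-CR vector."""
        x, y = v
        return np.array([1.0, x, y, x * y, x * x, y * y])

    def calibrate(vectors, screen_points):
        """Least-squares fit from calibration targets: pupil-CR vectors -> screen (x, y)."""
        A = np.vstack([design_matrix(v) for v in vectors])
        coef, *_ = np.linalg.lstsq(A, np.asarray(screen_points, float), rcond=None)
        return coef  # shape (6, 2)

    def estimate_gaze(pupil_xy, glint_xy, coef):
        """Map a new pupil-CR vector to an on-screen gaze point."""
        return design_matrix(pupil_cr_vector(pupil_xy, glint_xy)) @ coef

In practice, calibrate would be run on a handful of known on-screen targets viewed by the participant, after which estimate_gaze converts each video frame's pupil and glint positions into a gaze estimate.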
Peer reviewed
Jiang, Jintao; Bernstein, Lynne E. – Journal of Experimental Psychology: Human Perception and Performance, 2011
When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…
Descriptors: Video Technology, Acoustics, Syllables, Auditory Stimuli
Peer reviewed
ten Holt, G. A.; van Doorn, A. J.; de Ridder, H.; Reinders, M. J. T.; Hendriks, E. A. – Sign Language Studies, 2009
We present the results of an experiment on lexical recognition of human sign language signs in which the available perceptual information about handshape and hand orientation was manipulated. Stimuli were videos of signs from Sign Language of the Netherlands (SLN). The videos were processed to create four conditions: (1) one in which neither…
Descriptors: Sign Language, Visual Perception, Foreign Countries, Visual Stimuli