Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 5
  Since 2006 (last 20 years): 5
Descriptor
  Abstract Reasoning: 5
  Language Acquisition: 5
  Memory: 5
  Models: 5
  Computational Linguistics: 4
  Language Processing: 4
  Artificial Intelligence: 3
  Children: 3
  Figurative Language: 3
  Linguistic Theory: 3
  Syntax: 3
Source
  First Language: 5
Author
  Ambridge, Ben: 1
  Caplan, Spencer: 1
  Coumel, Marion: 1
  Frank, Michael C.: 1
  Hardy, Sophie M.: 1
  Kachergis, George: 1
  Knabe, Melina L.: 1
  Kodner, Jordan: 1
  Mahowald, Kyle: 1
  Messenger, Katherine: 1
  Schuler, Kathryn D.: 1
Publication Type
  Journal Articles: 5
  Reports - Evaluative: 4
  Opinion Papers: 2
  Reports - Descriptive: 1

Messenger, Katherine; Hardy, Sophie M.; Coumel, Marion – First Language, 2020
The authors argue that Ambridge's radical exemplar account of language cannot clearly explain all syntactic priming evidence, such as inverse preference effects ("greater" priming for less frequent structures), and the contrast between short-lived lexical boost and long-lived abstract priming. Moreover, without recourse to a level of…
Descriptors: Language Acquisition, Syntax, Priming, Criticism

Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition

Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory

Knabe, Melina L.; Vlach, Haley A. – First Language, 2020
Ambridge argues that there is widespread agreement among child language researchers that learners store linguistic abstractions. In this commentary the authors first argue that this assumption is incorrect; anti-representationalist/exemplar views are pervasive in theories of child language. Next, the authors outline what has been learned from this…
Descriptors: Child Language, Children, Language Acquisition, Models

Ambridge, Ben – First Language, 2020
The goal of this article is to make the case for a radical exemplar account of child language acquisition, under which unwitnessed forms are produced and comprehended by on-the-fly analogy across multiple stored exemplars, weighted by their degree of similarity to the target with regard to the task at hand. Across the domains of (1) word meanings,…
Descriptors: Language Acquisition, Morphology (Languages), Phonetics, Phonology