Showing 1 to 15 of 20 results
Peer reviewed
Stanojević, Miloš; Brennan, Jonathan R.; Dunagan, Donald; Steedman, Mark; Hale, John T. – Cognitive Science, 2023
To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not…
Descriptors: Correlation, Language Processing, Brain Hemisphere Functions, Natural Language Processing
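The entry above contrasts explicitly modeled context-free grammars with richer formalisms. Purely as a point of reference, and not as the authors' pipeline, the sketch below shows what CFG-assigned syntactic structure looks like for a toy sentence; the use of the NLTK library and the toy grammar are assumptions for illustration.

```python
# Minimal CFG parsing sketch using NLTK (an assumption; the article does not
# prescribe tooling). It only illustrates the kind of explicit syntactic
# structure a context-free grammar assigns to a sentence.
import nltk

toy_grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N
VP -> V NP | V
Det -> 'the'
N  -> 'dog' | 'cat'
V  -> 'chased' | 'slept'
""")

parser = nltk.ChartParser(toy_grammar)
for tree in parser.parse("the dog chased the cat".split()):
    tree.pretty_print()  # prints the single parse licensed by this toy grammar
```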
Peer reviewed
Swingley, Daniel; Algayres, Robin – Cognitive Science, 2024
Computational models of infant word-finding typically operate over transcriptions of infant-directed speech corpora. It is now possible to test models of word segmentation on speech materials, rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies…
Descriptors: Sentences, Word Recognition, Psycholinguistics, Infants
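As background on what "operating over transcriptions" typically means in this literature, the sketch below segments a syllabified toy corpus at dips in forward transitional probability. It is a generic illustration, not the authors' proposal (which concerns testing models on speech itself); the corpus and the 0.75 threshold are invented for the example.

```python
# Transitional-probability word segmentation over syllable transcriptions:
# a classic illustration, not the model proposed in the article.
from collections import Counter

def transitional_probs(utterances):
    bigrams, unigrams = Counter(), Counter()
    for utt in utterances:
        sylls = utt.split()
        unigrams.update(sylls)
        bigrams.update(zip(sylls, sylls[1:]))
    return {(a, b): count / unigrams[a] for (a, b), count in bigrams.items()}

def segment(utterance, tp, threshold=0.75):
    sylls = utterance.split()
    words, current = [], [sylls[0]]
    for a, b in zip(sylls, sylls[1:]):
        if tp.get((a, b), 0.0) < threshold:  # low forward TP -> posit a word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

corpus = ["pre ti ba by", "ba by go la tu", "pre ti do gi"]
tp = transitional_probs(corpus)
print(segment("pre ti ba by", tp))  # -> ['preti', 'baby']
```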
Peer reviewed
Ramotowska, Sonia; Steinert-Threlkeld, Shane; van Maanen, Leendert; Szymanik, Jakub – Cognitive Science, 2023
According to logical theories of meaning, the meaning of an expression can be formalized and encoded in truth conditions. The vagueness of language and individual differences between people are challenging to incorporate into meaning representations. In this paper, we propose a new approach to study truth-conditional representations of vague…
Descriptors: Computation, Models, Semantics, Decision Making
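As a hedged illustration of what a truth-conditional representation with vagueness and individual differences might look like, the sketch below makes the truth judgment for a vague quantifier such as "most" graded rather than all-or-none. This is not the authors' model; the choice of "most", the threshold, and the noise value are invented, since the truncated abstract does not name the expressions studied.

```python
# Toy threshold-plus-noise account of a vague quantifier: each judge has a
# threshold on the proportion and some noise, so judgments are graded.
import math

def p_true(proportion, threshold=0.55, noise=0.08):
    """Probability of judging 'Most As are B' true, given the proportion of As that are B."""
    return 1.0 / (1.0 + math.exp(-(proportion - threshold) / noise))

for prop in (0.45, 0.55, 0.65, 0.80):
    print(f"proportion={prop:.2f}  P(true)={p_true(prop):.2f}")
```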
Peer reviewed
Testoni, Alberto; Bernardi, Raffaella; Ruggeri, Azzurra – Cognitive Science, 2023
In recent years, a multitude of datasets of human-human conversations has been released for the main purpose of training conversational agents based on data-hungry artificial neural networks. In this paper, we argue that datasets of this sort represent a useful and underexplored source to validate, complement, and enhance cognitive studies on…
Descriptors: Questioning Techniques, Cognitive Science, Natural Language Processing, Data Use
Peer reviewed
Kocab, Annemarie; Davidson, Kathryn; Snedeker, Jesse – Cognitive Science, 2022
Classical quantifiers (like "all," "some," and "none") express relationships between two sets, allowing us to make generalizations (like "no elephants fly"). Devices like these appear to be universal in human languages. Is the ubiquity of quantification due to a universal property of the human mind or is it…
Descriptors: Natural Language Processing, Form Classes (Languages), Cognitive Processes, Spanish
Peer reviewed
Jiang, Hang; Frank, Michael C.; Kulkarni, Vivek; Fourtassi, Abdellah – Cognitive Science, 2022
The linguistic input children receive across early childhood plays a crucial role in shaping their knowledge about the world. To study this input, researchers have begun applying distributional semantic models to large corpora of child-directed speech, extracting various patterns of word use/co-occurrence. Previous work using these models has not…
Descriptors: Caregivers, Caregiver Child Relationship, Linguistic Input, Semantics
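For readers unfamiliar with distributional semantic models, the sketch below shows the basic co-occurrence-plus-cosine recipe on a tiny invented "child-directed" corpus. It is a generic illustration of the model family the entry above names, not the specific models or corpora used in the article.

```python
# Count-based distributional semantics on a toy corpus: build co-occurrence
# vectors within a fixed window, then compare words with cosine similarity.
from collections import Counter, defaultdict
import math

corpus = [
    "look at the doggy",
    "the doggy wants the ball",
    "where is the ball",
    "the kitty wants milk",
]

window = 2
cooc = defaultdict(Counter)
for utt in corpus:
    toks = utt.split()
    for i, word in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if i != j:
                cooc[word][toks[j]] += 1

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[k] * v[k] for k in shared)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

print(cosine(cooc["doggy"], cooc["kitty"]))
print(cosine(cooc["doggy"], cooc["milk"]))
```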
Peer reviewed
Hoppe, Dorothée B.; van Rij, Jacolien; Hendriks, Petra; Ramscar, Michael – Cognitive Science, 2020
Linguistic category learning has been shown to be highly sensitive to linear order, and depending on the task, differentially sensitive to the information provided by preceding category markers ("premarkers," e.g., gendered articles) or succeeding category markers ("postmarkers," e.g., gendered suffixes). Given that numerous…
Descriptors: Discrimination Learning, Computational Linguistics, Natural Language Processing, Artificial Languages
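The entry's descriptors mention discrimination learning; the sketch below is a generic error-driven (Rescorla-Wagner) update, not the article's exact simulations, and the cue and outcome labels are invented. The intuition is that with a premarker the marker sits among the predicting cues, while with a postmarker it sits among the predicted outcomes, even though the update rule itself is unchanged.

```python
# Generic Rescorla-Wagner (error-driven) learning trial, offered only as an
# illustration of discriminative category learning.
def rw_trial(weights, cues, all_outcomes, present_outcomes, rate=0.1):
    """Adjust cue->outcome weights by the prediction error on one trial."""
    for outcome in all_outcomes:
        prediction = sum(weights.get((cue, outcome), 0.0) for cue in cues)
        target = 1.0 if outcome in present_outcomes else 0.0
        error = target - prediction
        for cue in cues:
            weights[(cue, outcome)] = weights.get((cue, outcome), 0.0) + rate * error

weights = {}
classes = {"class_A", "class_B"}
# "Premarker"-style trials: marker and noun together act as cues for the noun class.
for _ in range(100):
    rw_trial(weights, {"marker_a", "noun_1"}, classes, {"class_A"})
    rw_trial(weights, {"marker_b", "noun_2"}, classes, {"class_B"})

# Cue competition: the marker and the noun end up sharing the predictive weight.
print(round(weights[("marker_a", "class_A")], 2), round(weights[("noun_1", "class_A")], 2))
```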
Peer reviewed
Dasgupta, Ishita; Guo, Demi; Gershman, Samuel J.; Goodman, Noah D. – Cognitive Science, 2020
As modern deep networks become more complex, and get closer to human-like capabilities in certain domains, the question arises as to how the representations and decision rules they learn compare to the ones in humans. In this work, we study representations of sentences in one such artificial system for natural language processing. We first present…
Descriptors: Natural Language Processing, Man Machine Systems, Heuristics, Sentences
Peer reviewed
Richie, Russell; Bhatia, Sudeep – Cognitive Science, 2021
Similarity is one of the most important relations humans perceive, arguably subserving category learning and categorization, generalization and discrimination, judgment and decision making, and other cognitive functions. Researchers have proposed a wide range of representations and metrics that could be at play in similarity judgment, yet have not…
Descriptors: Classification, Generalization, Decision Making, Cognitive Processes
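To make the "representations and metrics" point concrete, here is a small self-contained illustration, not the authors' analysis, of how two common metrics over the same vectors can rank the same item pairs differently; the vectors are invented.

```python
# Two similarity metrics over the same toy vectors can disagree about which
# pair is "more similar", which is why the choice of metric matters.
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def neg_euclidean(u, v):
    return -math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

u, v, w = [1.0, 2.0, 0.0], [2.0, 4.0, 0.0], [1.2, 1.8, 0.3]
print(cosine(u, v), cosine(u, w))                # v is a scaled copy of u, so cosine = 1.0
print(neg_euclidean(u, v), neg_euclidean(u, w))  # but w is closer to u in Euclidean terms
```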
Peer reviewed
Johns, Brendan T.; Jamieson, Randall K. – Cognitive Science, 2018
The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers,…
Descriptors: Statistical Analysis, Written Language, Models, Language Enrichment
Peer reviewed
Lau, Jey Han; Clark, Alexander; Lappin, Shalom – Cognitive Science, 2017
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary…
Descriptors: Grammar, Probability, Sentences, Language Research
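One way the probabilistic view gets operationalized in this literature is through length- and frequency-normalized acceptability scores derived from language-model probabilities. The sketch below computes one such score, SLOR (syntactic log-odds ratio), from per-token log probabilities; the language model is left abstract and the numbers are invented, so this is an illustration rather than the article's exact procedure.

```python
# SLOR: (log P_model(sentence) - log P_unigram(sentence)) / sentence length.
# Normalizing by unigram probability and length keeps long or rare-word
# sentences from being penalized simply for being long or rare.
def slor(token_logprobs, unigram_logprobs):
    assert len(token_logprobs) == len(unigram_logprobs)
    n = len(token_logprobs)
    return (sum(token_logprobs) - sum(unigram_logprobs)) / n

# Toy per-token log-probabilities for a 4-word sentence (invented values).
print(slor([-2.1, -3.0, -1.2, -4.0], [-5.0, -6.5, -4.8, -7.1]))
```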
Peer reviewed
Krahmer, Emiel; Koolen, Ruud; Theune, Mariet – Cognitive Science, 2012
In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…
Descriptors: Natural Language Processing, Mathematics, Computational Linguistics
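For context, the Incremental Algorithm at the center of this exchange (this entry and the two van Deemter et al. entries below) can be sketched in a few lines: attributes are tried in a fixed preference order, an attribute is added to the description if it rules out at least one remaining distractor, and the loop stops once the target is distinguished. The sketch below simplifies away details such as the special treatment of the type attribute, and the example objects and preference order are invented.

```python
# Simplified sketch of the Incremental Algorithm (Dale & Reiter, 1995).
# Objects are attribute->value dicts; the preference order is fixed in advance,
# which is exactly the design choice debated in this exchange.
def incremental_algorithm(target, distractors, preference_order):
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description

target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [
    {"type": "chair", "colour": "red", "size": "small"},
    {"type": "table", "colour": "red", "size": "large"},
]
print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
# -> {'type': 'chair', 'size': 'large'}  (roughly, "the large chair")
```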
Peer reviewed
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard – Cognitive Science, 2012
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
Descriptors: Natural Language Processing, Mathematics, Computational Linguistics
Peer reviewed
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard – Cognitive Science, 2012
This response discusses the experiment reported in Krahmer et al.'s Letter to the Editor of "Cognitive Science". We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from "normal" (i.e.,…
Descriptors: Experiments, Natural Language Processing, Mathematics, Computational Linguistics
Peer reviewed
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon – Cognitive Science, 2015
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
Descriptors: Grammar, Natural Language Processing, Computer Mediated Communication, Graphs