Showing 1 to 15 of 32 results
Dascalu, Marina-Dorinela; Ruseti, Stefan; Dascalu, Mihai; McNamara, Danielle; Trausan-Matu, Stefan – Grantee Submission, 2020
Reading comprehension requires readers to connect ideas within and across texts to produce a coherent mental representation. One important factor in that complex process is the cohesion of the document(s). Here, we tackle the challenge of providing researchers and practitioners with a tool to visualize text cohesion both within (intra) and…
Descriptors: Network Analysis, Graphs, Connected Discourse, Reading Comprehension
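The entry above describes visualizing text cohesion as a graph. A minimal sketch of the general idea (not the authors' tool, which uses richer semantic models) treats sentences as nodes and weights edges by lexical overlap:

```python
# Cohesion-graph sketch: sentences are nodes, edges are weighted by
# Jaccard similarity of their token sets. Illustrative only; real
# cohesion tools use trained semantic similarity measures.

def tokenize(sentence):
    """Lowercase bag-of-words tokenization, punctuation stripped."""
    return set(sentence.lower().replace(".", "").split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cohesion_graph(sentences, threshold=0.1):
    """Return weighted edges (i, j, weight) between cohesive sentence pairs."""
    tokens = [tokenize(s) for s in sentences]
    edges = []
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            w = jaccard(tokens[i], tokens[j])
            if w >= threshold:
                edges.append((i, j, round(w, 3)))
    return edges

text = [
    "Reading comprehension requires connecting ideas across a text.",
    "Cohesive texts make those connections explicit for the reader.",
    "The reader builds a coherent mental representation of the text.",
]
print(cohesion_graph(text))
```

Pairs below the threshold are omitted, so sparse output directly flags low-cohesion regions of a document.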
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
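The semantic-distance approach to AUT scoring mentioned above is commonly implemented as one minus the cosine similarity between vector representations of the prompt and the response. A minimal sketch with hand-made toy vectors (real scorers use trained word embeddings such as word2vec or GloVe):

```python
import math

# Toy 3-dimensional "embeddings" -- illustrative values only; real AUT
# scorers look these up in trained word-embedding models.
EMBEDDINGS = {
    "brick":  [0.9, 0.1, 0.2],
    "house":  [0.8, 0.2, 0.3],   # common use: vector close to "brick"
    "weapon": [0.1, 0.9, 0.4],   # unusual use: vector far from "brick"
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def semantic_distance(prompt, response):
    """Higher distance = more original response (the core AUT signal)."""
    return 1.0 - cosine(EMBEDDINGS[prompt], EMBEDDINGS[response])

# An unusual use of "brick" should score a larger distance than a common one.
print(semantic_distance("brick", "house") < semantic_distance("brick", "weapon"))
```

The ordering of distances, not their absolute values, is what such scorers calibrate against human originality ratings.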
Patience Stevens; David C. Plaut – Grantee Submission, 2022
The morphological structure of complex words impacts how they are processed during visual word recognition. This impact varies over the course of reading acquisition and for different languages and writing systems. Many theories of morphological processing rely on a decomposition mechanism, in which words are decomposed into explicit…
Descriptors: Written Language, Morphology (Languages), Word Recognition, Reading Processes
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Reading Processes, Memory, Schemata (Cognition)
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
Peer reviewed
Robert-Mihai Botarleanu; Micah Watanabe; Mihai Dascalu; Scott A. Crossley; Danielle S. McNamara – Grantee Submission, 2023
Age of Acquisition (AoA) scores approximate the age at which a language speaker fully understands a word's semantic meaning and represent a quantitative measure of the relative difficulty of words in a language. AoA word lists exist across various languages, with English having the most complete lists that capture the largest percentage of the…
Descriptors: Multilingualism, English (Second Language), Second Language Learning, Second Language Instruction
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Memory, Inferences, Syntax
Daniel P. Feller; Amani Talwar; Daphne Greenberg; Ryan D. Kopatich; Joseph P. Magliano – Grantee Submission, 2023
Background: A significant portion of adults struggle to read at a basic level. Word reading (defined here as decoding and word recognition) appears to play a pivotal role for this population of readers; however, less is known about how word reading relates to other important semantic processes (e.g., vocabulary, sentence processing) known to…
Descriptors: Correlation, Word Recognition, Reading Comprehension, Reading Processes
Guerrero, Tricia A.; Wiley, Jennifer – Grantee Submission, 2019
Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning…
Descriptors: Computer Assisted Testing, Student Evaluation, Introductory Courses, Psychology
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Grantee Submission, 2021
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
Johns, Brendan T.; Jones, Michael N.; Mewhort, D. J. K. – Grantee Submission, 2019
To account for natural variability in cognitive processing, it is standard practice to optimize a model's parameters by fitting it to behavioral data. Although most language-related theories acknowledge a large role for experience in language processing, variability reflecting that knowledge is usually ignored when evaluating a model's fit to…
Descriptors: Language Processing, Models, Information Sources, Linguistics
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one or multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
Li, Haiying; Cai, Zhiqiang; Graesser, Arthur – Grantee Submission, 2018
In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries.…
Descriptors: Computer Assisted Testing, Scoring, Semantics, Evaluation Methods
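The LSA similarity component described above can be sketched as a truncated SVD of a term-by-document count matrix, followed by cosine similarity between documents in the reduced latent space. The matrix below is a toy example (not the authors' crowdsourced corpus):

```python
import numpy as np

# LSA similarity sketch: SVD of a toy term-by-document count matrix,
# then cosine similarity between documents in the latent space.
# Rows = terms, columns = documents; doc0/doc1 share vocabulary, doc2 differs.
counts = np.array([
    [2, 1, 0],   # "comprehension"
    [0, 1, 0],   # "summary"
    [0, 0, 3],   # "algebra"
], dtype=float)

U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                               # keep the top-k latent dimensions
docs = (np.diag(s[:k]) @ Vt[:k]).T  # document vectors in latent space

def cosine(u, v):
    """Cosine similarity between two latent document vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A target summary would be scored by its similarity to a model summary:
print(cosine(docs[0], docs[1]))   # related documents: high similarity
print(cosine(docs[0], docs[2]))   # unrelated documents: near zero
```

Truncating to k dimensions is what lets LSA treat documents with overlapping (not identical) vocabulary as similar.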
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes