Showing all 4 results
Peer reviewed
Phelps, Geoffrey; Bridgeman, Brent; Yan, Fred; Steinberg, Jonathan; Weren, Barbara; Zhou, Jiawen – ETS Research Report Series, 2020
In this report we provide preliminary evidence on the measurement characteristics for a new type of teaching performance assessment designed to be combined with complementary assessments of teacher content knowledge. The resulting test, which we refer to as the Foundational Assessment of Competencies for Teaching (FACT), is designed for use as…
Descriptors: Teacher Competency Testing, Performance Based Assessment, Preservice Teachers, Teacher Certification
Peer reviewed
Klieger, David M.; Bridgeman, Brent; Tannenbaum, Richard J.; Cline, Frederick A.; Olivera-Aguilar, Margarita – ETS Research Report Series, 2018
Educational Testing Service (ETS), working with 21 U.S. law schools, evaluated the predictive validity of the GRE® General Test using a sample of 1,587 current and graduated law students. Results indicated that the GRE is a strong, generalizably valid predictor of first-year law school grades and that it provides useful information even when…
Descriptors: College Entrance Examinations, Graduate Study, Test Validity, Scores
Peer reviewed
Bridgeman, Brent; Cho, Yeonsuk; DiPietro, Stephen – Language Testing, 2016
Data from 787 international undergraduate students at an urban university in the United States were used to demonstrate the importance of separating a sample into meaningful subgroups in order to demonstrate the ability of an English language assessment to predict the first-year grade point average (GPA). For example, when all students were pooled…
Descriptors: Grade Prediction, English Curriculum, Language Tests, Undergraduate Students
Peer reviewed
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the e-rater® scoring engine were built and evaluated for the GRE® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models