Showing 1 to 15 of 22 results
Peer reviewed
Kane, Michael T. – Assessment in Education: Principles, Policy & Practice, 2017
In response to an argument by Baird, Andrich, Hopfenbeck and Stobart (2017), Michael Kane states that there needs to be a better fit between educational assessment and learning theory. In line with this goal, Kane examines how psychometric constraints might be loosened by relaxing some psychometric "rules" in some assessment…
Descriptors: Educational Assessment, Psychometrics, Standards, Test Reliability
Peer reviewed
Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U. – Educational Measurement: Issues and Practice, 2016
The main points of Sijtsma and Green and Yang in Educational Measurement: Issues and Practice (34, 4) are that reliability, internal consistency, and unidimensionality are distinct and that Cronbach's alpha may be problematic. Neither of these assertions is at odds with Davenport, Davison, Liou, and Love in the same issue. However, many authors…
Descriptors: Educational Assessment, Reliability, Validity, Test Construction
Hartley, James – Psychology Teaching Review, 2017
In this article, Hartley notes the difficulties of using questionnaires to assess the efficiency of new instructional methods and highlights nine issues that researchers must consider. Hartley continues the discussion about the use of questionnaires and suggests that psychology teachers can help improve the teaching of psychology by drawing…
Descriptors: Questionnaires, Instructional Innovation, Instructional Effectiveness, Teaching Methods
Peer reviewed
Brennan, Robert L. – Measurement: Interdisciplinary Research and Perspectives, 2015
Koretz, in his article published in this issue, provides compelling arguments that the high stakes currently associated with accountability testing lead to behavioral changes in students, teachers, and other stakeholders that often have negative consequences, such as inflated scores. Koretz goes on to argue that these negative consequences require…
Descriptors: Accountability, High Stakes Tests, Behavior Change, Student Behavior
Peer reviewed
Murawska, Jaclyn M.; Walker, David A. – Mid-Western Educational Researcher, 2017
In this commentary, we offer a set of visual tools that can assist education researchers, especially those in the field of mathematics, in developing cohesiveness from a mixed methods perspective, commencing at a study's research questions and literature review, through its data collection and analysis, and finally to its results. This expounds…
Descriptors: Mixed Methods Research, Research Methodology, Visual Aids, Research Tools
Salies, Tania Gastao – 1998
A discussion of the evaluation of writing, particularly in English as a Second Language, argues for a communicative approach that reflects current practice in language teaching and learning. The movement toward more communication-oriented and more valid language testing is examined briefly, and direct assessment is chosen as the preferred format…
Descriptors: Communicative Competence (Languages), English (Second Language), Evaluation Criteria, Foreign Countries
Wainer, Howard – 1982
This paper is the transcript of a talk given to those who use test information but have little technical background in test theory. The concepts of modern test theory are compared with those of traditional test theory and with a probable future test theory. The explanations given are couched within an extended metaphor that allows a full description…
Descriptors: Difficulty Level, Latent Trait Theory, Metaphors, Test Items
Ediger, Marlow – 2001
To assure fair and honest grading of student achievement, validity and reliability are key when writing test items. Clarity in writing each item is essential. Multiple procedures for assessing the achievement of university students should be implemented, and instructors and professors should be held accountable for the fair and honest grading of…
Descriptors: Academic Achievement, College Students, Educational Technology, Grades (Scholastic)
Peer reviewed
Greer, Darryl – Review of Higher Education, 1984
The legislative history of laws concerning disclosure of standardized college admissions test items is reviewed, the effect of existing laws in California and New York is outlined, and public policy and legal questions leading to and resulting from the legislation are discussed. (MSE)
Descriptors: College Entrance Examinations, Disclosure, Higher Education, Legal Problems
Fishman, Judith – Writing Program Administration, 1984
Examines the CUNY-WAT program and questions many aspects of it, especially the choice and phrasing of topics. (FL)
Descriptors: Essay Tests, Higher Education, Test Format, Test Items
Ebel, Robert L. – 1981
An alternate-choice test item is a simple declarative sentence, one portion of which is given with two different wordings. For example, "Foundations like Ford and Carnegie tend to be (1) eager (2) hesitant to support innovative solutions to educational problems." The examinee's task is to choose the alternative that makes the sentence…
Descriptors: Comparative Testing, Difficulty Level, Guessing (Tests), Multiple Choice Tests
Peer reviewed
Kolstad, Rosemarie; And Others – Journal of Dental Education, 1982
Nonrestricted-answer multiple-choice test items are recommended as a way of including more facts and fewer incorrect answers in each item, and they do not cue successful guessing as restricted multiple-choice items can. Examination construction, scoring, and reliability are discussed. (MSE)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The role and utility currently attributed to measures of instructional sensitivity are summarized. A case is made that the rationale for assessing instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Peer reviewed
Gross, Leon J. – Journal of Optometric Education, 1982
A critique of a variety of formats used in combined-response test items (those in which the respondent must choose the correct combination of options: a and b, all of the above, etc.) illustrates why this kind of testing is inherently flawed and should not be used in optometry examinations. (MSE)
Descriptors: Higher Education, Multiple Choice Tests, Optometry, Standardized Tests
Peer reviewed
Descy, Don E. – International Journal of Instructional Media, 1991
Describes the development of an affective measure of student attitudes toward their advisors, called the Descy Attitude toward Advisors scale (DATAs). Content validity is discussed, along with construct validity and alpha reliability data gathered on 120 graduate students in the fields of education and nursing. A copy of DATAs is appended.…
Descriptors: Academic Advising, Attitude Measures, Faculty Advisers, Higher Education