Showing 1 to 15 of 118 results
Peer reviewed
Marion, Scott; Domaleski, Chris – Educational Measurement: Issues and Practice, 2019
This article offers a critique of the validity argument put forward by Camara, Mattern, Croft, and Vispoel (2019) regarding the use of college-admissions tests in high school assessment systems. We challenge their argument in two main ways. First, we illustrate why their argument fails to address broader issues related to consequences of using…
Descriptors: College Entrance Examinations, High School Students, Test Use, Validity
Peer reviewed
Lazowski, Rory A.; Barron, Kenneth E.; Kosovich, Jeff J.; Hulleman, Chris S. – Educational Measurement: Issues and Practice, 2016
In an article published in "Educational Measurement: Issues and Practice," Gaertner and McClarty (2015) discuss a college readiness index based, in part, on nonacademic or noncognitive factors measured in middle school. Such an index is laudable as it incorporates important constructs beyond academic achievement measures that may be…
Descriptors: College Readiness, Measures (Individuals), Student Motivation, Validity
Peer reviewed
Mattern, Krista; Allen, Jeff; Camara, Wayne – Educational Measurement: Issues and Practice, 2016
The findings of the Gaertner and McClarty article (2015) raised awareness of two extremely important topics related to college readiness: First, to effect change, we must measure students' progression towards college readiness throughout their K-12 careers rather than just at the culmination of high school. Second, college readiness encompasses…
Descriptors: College Readiness, Middle School Students, Measures (Individuals), Progress Monitoring
Peer reviewed
Gaertner, Matthew N.; McClarty, Katie Larsen – Educational Measurement: Issues and Practice, 2016
This rejoinder provides a reply to comments on a middle school college readiness index, which was devised to generate earlier and more nuanced readiness diagnoses for K-12 students. Issues of reliability and validity (including construct underrepresentation and construct-irrelevant variance) are discussed in detail. In addition, comments from…
Descriptors: Middle School Students, College Readiness, Measures (Individuals), Reliability
Peer reviewed
Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U. – Educational Measurement: Issues and Practice, 2016
The main points of Sijtsma and Green and Yang in Educational Measurement: Issues and Practice (34, 4) are that reliability, internal consistency, and unidimensionality are distinct and that Cronbach's alpha may be problematic. Neither of these assertions is at odds with Davenport, Davison, Liou, and Love in the same issue. However, many authors…
Descriptors: Educational Assessment, Reliability, Validity, Test Construction
Peer reviewed
Sinharay, Sandip; Haberman, Shelby; Boughton, Keith – Educational Measurement: Issues and Practice, 2015
Feinberg and Wainer (2014) provided a simple equation to approximate/predict a subscore's value. The purpose of this note is to point out that their equation is often inaccurate in that it does not always predict a subscore's value correctly. Therefore, the utility of their simple equation is not clear.
Descriptors: Equations (Mathematics), Scores, Prediction, Accuracy
Peer reviewed
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Peer reviewed
Kingston, Neal; Nash, Brooke – Educational Measurement: Issues and Practice, 2012
In their critique of Kingston and Nash (2011), Briggs, Ruiz-Primo, Furtak, Shepard, and Yin (2012) make several major points. First, Kingston and Nash's conclusions about the state of research on the efficacy of formative assessment are similar to those of other researchers, "including some of the authors." Second, their research may be unique in that they…
Descriptors: Formative Evaluation, Meta Analysis, Effect Size, Research Methodology
Peer reviewed
Briggs, Derek C.; Ruiz-Primo, Maria Araceli; Furtak, Erin; Shepard, Lorrie; Yin, Yue – Educational Measurement: Issues and Practice, 2012
In a recent article published in "EM:IP," Kingston and Nash report on the results of a meta-analysis on the efficacy of formative assessment. They conclude that the average effect of formative assessment on student achievement is about 0.20 SD units. This would seem to dispel the myth that effects between 0.40 and 0.70 can be attributed to…
Descriptors: Academic Achievement, Outcome Measures, Meta Analysis, Inferences
Peer reviewed
Mislevy, Robert J. – Educational Measurement: Issues and Practice, 2012
This article presents the author's observations on Neil Dorans's NCME Career Award Address: "The Contestant Perspective on Taking Tests: Emanations from the Statue within." He calls attention to some points that Dr. Dorans made in his address, and offers his thoughts in response.
Descriptors: Testing, Test Reliability, Psychometrics, Scores
Peer reviewed
Shepard, Lorrie A. – Educational Measurement: Issues and Practice, 2009
In many school districts, the pressure to raise test scores has created overnight celebrity status for formative assessment. Its powers to raise student achievement have been touted, however, without attending to the research on which these claims were based. Sociocultural learning theory provides theoretical grounding for understanding how…
Descriptors: Learning Theories, Validity, Student Evaluation, Evaluation Methods
Peer reviewed
LaDuca, Tony – Educational Measurement: Issues and Practice, 2006
In the Spring 2005 issue, Wang, Schnipke, and Witt provided an informative description of the task inventory approach that centered on four functions of job analysis. The discussion included persuasive arguments for making systematic connections between tasks and KSAs. But several other facets of the discussion were much less persuasive. This…
Descriptors: Criticism, Task Analysis, Job Analysis, Persuasive Discourse
Peer reviewed
Wang, Ning; Witt, Elizabeth A.; Schnipke, Deborah – Educational Measurement: Issues and Practice, 2006
In his commentary on our paper on the use of knowledge, skill, and ability statements in developing credentialing examinations (Wang, Schnipke, & Witt, 2005), Dr. LaDuca set forth his concerns while commending our paper for providing helpful insights into the importance of careful delineation of KSAs. We believe that there is little substantive…
Descriptors: Criticism, Job Analysis, Licensing Examinations (Professions), Certification
Peer reviewed
Schulz, E. Matthew – Educational Measurement: Issues and Practice, 2006
A look at real data shows that Reckase's psychometric theory for standard setting is not applicable to bookmark and that his simulations cannot explain actual differences between methods. It is suggested that exclusively test-centered, criterion-referenced approaches are too idealized and that a psychophysics paradigm and a theory of group…
Descriptors: Psychometrics, Group Behavior, Standard Setting, Simulation
Peer reviewed
Linn, Robert L. – Educational Measurement: Issues and Practice, 2006
The question of what it means to follow the "Standards" is discussed. It is argued that the "Standards" consists of statements of general principles, and that interpretation for specific applications requires professional judgment. As a result, disagreements among professionals on the applicability of particular standards to specific situations…
Descriptors: Standards, Accountability, Educational Testing, Context Effect