Showing 1 to 15 of 131 results
Peer reviewed
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
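For readers unfamiliar with the classical side of this distinction, a minimal sketch of CTT item statistics (proportion correct and corrected item-total point-biserial) is given below; the 0/1-scored response matrix and variable names are illustrative assumptions, not taken from the module itself.

import numpy as np

# Hypothetical 0/1-scored responses: rows = examinees, columns = items.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

total = scores.sum(axis=1)        # each examinee's total score
p_values = scores.mean(axis=0)    # item difficulty: proportion correct

# Corrected point-biserial discrimination: correlate each item with the
# total score excluding that item, to avoid inflating the correlation.
for j in range(scores.shape[1]):
    rest = total - scores[:, j]
    r = np.corrcoef(scores[:, j], rest)[0, 1]
    print(f"Item {j + 1}: p = {p_values[j]:.2f}, corrected r_pb = {r:.2f}")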
Peer reviewed
Bradshaw, Laine; Levy, Roy – Educational Measurement: Issues and Practice, 2019
Although much research has been conducted on the psychometric properties of cognitive diagnostic models, they are only recently being used in operational settings to provide results to examinees and other stakeholders. Using this newer class of models in practice comes with a fresh challenge for diagnostic assessment developers: effectively…
Descriptors: Data Interpretation, Probability, Classification, Diagnostic Tests
Peer reviewed
Attali, Yigal – Educational Measurement: Issues and Practice, 2019
Rater training is an important part of developing and conducting large-scale constructed-response assessments. As part of this process, candidate raters have to pass a certification test to confirm that they are able to score consistently and accurately before they begin scoring operationally. Moreover, many assessment programs require raters to…
Descriptors: Evaluators, Certification, High Stakes Tests, Scoring
Peer reviewed
Fidler, James R.; Risk, Nicole M. – Educational Measurement: Issues and Practice, 2019
Credentialing examination developers rely on task (job) analyses for establishing inventories of task and knowledge areas in which competency is required for safe and successful practice in target occupations. There are many ways in which task-related information may be gathered from practitioner ratings, each with its own advantage and…
Descriptors: Job Analysis, Scaling, Licensing Examinations (Professions), Test Construction
Peer reviewed
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness
Peer reviewed
Johnson, Evelyn S.; Crawford, Angela; Moylan, Laura A.; Zheng, Yuzhu – Educational Measurement: Issues and Practice, 2018
The evidence-centered design framework was used to create a special education teacher observation system, Recognizing Effective Special Education Teachers. Extensive reviews of research informed the domain analysis and modeling stages, and led to the conceptual framework in which effective special education teaching is operationalized as the…
Descriptors: Evidence Based Practice, Special Education Teachers, Observation, Disabilities
Peer reviewed
Sinharay, Sandip – Educational Measurement: Issues and Practice, 2018
The choice of anchor tests is crucial in applications of the nonequivalent groups with anchor test design of equating. Sinharay and Holland (2006, 2007) suggested "miditests," which are anchor tests that are content-representative and have the same mean item difficulty as the total test but have a smaller spread of item difficulties.…
Descriptors: Test Content, Difficulty Level, Test Items, Test Construction
Peer reviewed
Gotch, Chad M.; Roduta Roberts, Mary – Educational Measurement: Issues and Practice, 2018
As the primary interface between test developers and multiple educational stakeholders, score reports are a critical component to the success (or failure) of any assessment program. The purpose of this review is to document recent research on individual-level score reporting to advance the research and practice of score reporting. We conducted a…
Descriptors: Scores, Models, Correlation, Stakeholders
Peer reviewed
Traynor, A.; Merzdorf, H. E. – Educational Measurement: Issues and Practice, 2018
During the development of large-scale curricular achievement tests, recruited panels of independent subject-matter experts use systematic judgmental methods--often collectively labeled "alignment" methods--to rate the correspondence between a given test's items and the objective statements in a particular curricular standards document.…
Descriptors: Achievement Tests, Expertise, Alignment (Education), Test Items
Peer reviewed
Newton, Paul E. – Educational Measurement: Issues and Practice, 2017
The dominant narrative for assessment design seems to reflect a strong, albeit largely implicit, undercurrent of purpose purism, which idealizes the principle that assessment design should be driven by a single assessment purpose. With a particular focus on achievement assessments, the present article questions the tenability of purpose purism,…
Descriptors: Evaluation Methods, Test Construction, Instructional Design, Decision Making
Peer reviewed
Embretson, Susan E. – Educational Measurement: Issues and Practice, 2016
Examinees' thinking processes have become an increasingly important concern in testing. The response processes aspect is a major component of validity, and contemporary tests increasingly involve specifications about the cognitive complexity of examinees' response processes. Yet, empirical research findings on examinees' cognitive processes are…
Descriptors: Testing, Cognitive Processes, Test Construction, Test Items
Peer reviewed
Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U. – Educational Measurement: Issues and Practice, 2016
The main points of Sijtsma and of Green and Yang in Educational Measurement: Issues and Practice (34, 4) are that reliability, internal consistency, and unidimensionality are distinct and that Cronbach's alpha may be problematic. Neither of these assertions is at odds with Davenport, Davison, Liou, and Love in the same issue. However, many authors…
Descriptors: Educational Assessment, Reliability, Validity, Test Construction
Peer reviewed
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2016
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, evoke a process that is both time-consuming and expensive because each item is written,…
Descriptors: Test Items, Test Construction, Psychometrics, Models
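As a toy illustration of the template-based item-generation idea motivating this line of work, a short sketch follows; the item model, stem, value ranges, and answer-key computation are all invented for illustration and are not drawn from the article.

import itertools

# Hypothetical item model: a stem with placeholders and constrained value ranges.
stem = "A train travels {speed} km/h for {hours} hours. How far does it go?"

speeds = [60, 80, 100]
hours = [2, 3]

# Instantiating the model yields a family of parallel items with known keys.
for speed, h in itertools.product(speeds, hours):
    item = stem.format(speed=speed, hours=h)
    key = speed * h
    print(f"{item}  (key: {key} km)")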
Peer reviewed
Sijtsma, Klaas – Educational Measurement: Issues and Practice, 2015
I discuss the contribution by Davenport, Davison, Liou, & Love (2015) in which they relate reliability represented by coefficient α to formal definitions of internal consistency and unidimensionality, both proposed by Cronbach (1951). I argue that coefficient α is a lower bound to reliability and that concepts of internal consistency and…
Descriptors: Reliability, Mathematics, Validity, Test Construction
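For reference, the textbook expression for coefficient α on a k-item test, with item variances \sigma^2_{Y_i} and total-score variance \sigma^2_X, and the lower-bound relation to reliability \rho_{XX'} that Sijtsma invokes (which holds under uncorrelated errors):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right), \qquad \alpha \le \rho_{XX'}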
Peer reviewed
Liu, Jinghua; Dorans, Neil J. – Educational Measurement: Issues and Practice, 2013
We make a distinction between two types of test changes: inevitable deviations from specifications versus planned modifications of specifications. We describe how score equity assessment (SEA) can be used as a tool to assess a critical aspect of construct continuity, the equivalence of scores, whenever planned changes are introduced to testing…
Descriptors: Tests, Test Construction, Test Format, Change