Showing all 3 results
Peer reviewed
Direct link
Furgol, Katherine E.; Ho, Andrew D.; Zimmerman, Dale L. – Educational and Psychological Measurement, 2010
Under the No Child Left Behind Act, large-scale test score trend analyses are widespread. These analyses often gloss over interesting changes in test score distributions and involve unrealistic assumptions. Further complications arise from analyses of unanchored, censored assessment data, or proportions of students lying within performance levels…
Descriptors: Trend Analysis, Sample Size, Federal Legislation, Simulation
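The abstract above concerns trend analyses built from proportions of students within performance levels rather than full score distributions. The sketch below is a minimal illustration of one standard way such proportions can be used to recover distributional change: assume scores are normal within each year, so the probit of the proportion below each fixed cut score is linear in the cut score. It is not the authors' method, and the cut scores and proportions are hypothetical.

```python
# Minimal sketch (not the authors' method): recover mean/SD per year from
# proportions of students below fixed performance-level cut scores, assuming
# within-year normality. Cut scores and proportions are hypothetical.
from statistics import NormalDist
import numpy as np

def fit_normal_from_cuts(cuts, prop_below):
    """Fit (mu, sigma) so Phi((c - mu)/sigma) matches prop_below at each cut.

    Phi^{-1}(p_k) = (c_k - mu)/sigma is linear in c_k, so a least-squares
    line through (c_k, probit(p_k)) has slope 1/sigma and intercept -mu/sigma.
    """
    z = np.array([NormalDist().inv_cdf(p) for p in prop_below])
    slope, intercept = np.polyfit(np.asarray(cuts, float), z, 1)
    sigma = 1.0 / slope
    mu = -intercept * sigma
    return mu, sigma

# Hypothetical cut scores on a fixed reporting scale, with the proportion
# of students scoring below each cut in two adjacent years.
cuts = [200.0, 240.0, 280.0]
year1 = fit_normal_from_cuts(cuts, [0.25, 0.60, 0.90])
year2 = fit_normal_from_cuts(cuts, [0.20, 0.52, 0.86])
print("Year 1: mu=%.1f sigma=%.1f" % year1)
print("Year 2: mu=%.1f sigma=%.1f" % year2)
print("Trend in year-1 SD units: %.2f" % ((year2[0] - year1[0]) / year1[1]))
```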
Peer reviewed
Direct link
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2008
Even when the scoring of an examination is based on item response theory (IRT), standard-setting methods seldom use this information directly when determining the minimum passing score (MPS) for an examination from an Angoff-based standard-setting study. Often, when IRT scoring is used, the MPS value for a test is converted to an IRT-based theta…
Descriptors: Standard Setting (Scoring), Scoring, Cutting Scores, Item Response Theory
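The conversion this abstract mentions, from a raw-score MPS to an IRT theta cut, is commonly done by inverting the test characteristic curve (TCC). The sketch below shows that conversion under an assumed 2PL model with hypothetical item parameters; it illustrates the general technique, not the paper's specific procedure.

```python
# Minimal sketch (assumed, not the paper's procedure): map an Angoff minimum
# passing score (MPS) on the raw-score scale to a theta cut score by
# inverting the test characteristic curve of a 2PL model via bisection.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response to an item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected raw score at theta."""
    return sum(p_correct(theta, a, b) for a, b in items)

def theta_for_mps(mps, items, lo=-4.0, hi=4.0, tol=1e-6):
    """Bisection: the TCC is monotone in theta, so find the theta whose
    expected raw score equals the MPS."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tcc(mid, items) < mps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical (a, b) parameters for a 10-item test; the MPS is an
# Angoff-study result expressed as an expected raw score.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.3), (1.0, -1.0), (0.9, 0.8),
         (1.1, 0.2), (1.3, -0.2), (0.7, 1.0), (1.0, 0.5), (1.4, -0.8)]
print("theta cut = %.3f" % theta_for_mps(6.5, items))
```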
Peer reviewed
Direct link
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2007
In an Angoff standard-setting procedure, judges estimate the probability that a hypothetical, randomly selected, minimally competent candidate will answer each item in the test correctly. In many cases, these item performance estimates are made twice, with information shared with the panelists between estimates. Especially for long tests, this…
Descriptors: Test Items, Probability, Item Analysis, Standard Setting (Scoring)
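As a companion to the procedure described in this abstract, the sketch below shows the basic Angoff arithmetic: each judge's summed item probabilities give that judge's implied cut score, and the panel MPS is the mean across judges. The ratings are hypothetical, standing in for a second-round panel after feedback; this is an illustration, not the study's data.

```python
# Minimal sketch (hypothetical ratings, not the study's data): compute the
# Angoff MPS from judges' per-item probability estimates for a minimally
# competent candidate.
import numpy as np

# rows = judges, columns = items; e.g. round-2 ratings after feedback
ratings = np.array([
    [0.6, 0.7, 0.4, 0.8, 0.5],
    [0.5, 0.8, 0.5, 0.7, 0.6],
    [0.7, 0.6, 0.3, 0.9, 0.5],
])

per_judge_cut = ratings.sum(axis=1)   # each judge's implied passing score
mps = per_judge_cut.mean()            # panel MPS on the raw-score scale
print("Per-judge cut scores:", per_judge_cut)
print("Panel MPS: %.2f out of %d items" % (mps, ratings.shape[1]))
```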