Publication Date
    In 2020: 0
    Since 2019: 18
    Since 2016 (last 5 years): 48
    Since 2011 (last 10 years): 124
    Since 2001 (last 20 years): 183
Descriptor
    Test Items: 344
    Item Response Theory: 112
    Difficulty Level: 78
    Test Construction: 73
    Item Analysis: 72
    Statistical Analysis: 66
    Test Bias: 60
    Simulation: 50
    Comparative Analysis: 49
    Higher Education: 48
    Test Reliability: 48
Source
    Educational and Psychological Measurement: 344
Education Level
    Higher Education: 16
    Elementary Education: 12
    Postsecondary Education: 12
    Secondary Education: 11
    Junior High Schools: 6
    Middle Schools: 6
    Grade 4: 5
    Early Childhood Education: 4
    Grade 5: 4
    Grade 8: 4
    High Schools: 4
Location
    Germany: 6
    Canada: 5
    Australia: 4
    Saudi Arabia: 4
    South Korea: 4
    United States: 4
    Israel: 3
    Netherlands: 3
    Taiwan: 3
    California: 2
    Florida: 2
Laws, Policies, & Programs
    No Child Left Behind Act 2001: 1
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which it is useful, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
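For reference, coefficient alpha itself is simple to compute from an examinees-by-items score matrix. A minimal numpy sketch (the function name and toy data are illustrative, not from the article):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 examinees x 4 binary items
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(6, 4)).astype(float)
print(cronbach_alpha(scores))
```

The debate the note weighs in on concerns when this statistic is a dependable reliability estimate, not how it is calculated.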
List, Marit Kristine; Köller, Olaf; Nagy, Gabriel – Educational and Psychological Measurement, 2019
Tests administered in studies of student achievement often contain a certain number of not-reached items (NRIs). The propensity for NRIs may depend on the proficiency measured by the test and on additional covariates. This article proposes a semiparametric model to study such relationships. Our model extends Glas and Pimentel's item response theory…
Descriptors: Educational Assessment, Item Response Theory, Multivariate Analysis, Test Items
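The NRI convention the model builds on can be made concrete: not-reached items are the trailing block of unanswered items at the end of a form, unlike omitted items embedded earlier in the test. A minimal sketch, assuming missing responses are coded as NaN:

```python
import numpy as np

def count_not_reached(responses: np.ndarray) -> np.ndarray:
    """Per examinee, count trailing NaN responses (the conventional NRI definition)."""
    n_persons, n_items = responses.shape
    counts = np.zeros(n_persons, dtype=int)
    for p in range(n_persons):
        j = n_items
        while j > 0 and np.isnan(responses[p, j - 1]):
            j -= 1
        counts[p] = n_items - j
    return counts

r = np.array([[1, 0, 1, np.nan, np.nan],
              [1, 1, np.nan, 1, 0]])
print(count_not_reached(r))  # [2 0]: the embedded gap in row 2 is an omission, not an NRI
```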
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
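Fitting a Thurstonian IRT model starts from a standard preprocessing step: each forced-choice block's ranking is recoded into binary pairwise comparisons. A sketch of that step (item labels and ranks are illustrative):

```python
from itertools import combinations

def block_to_pairs(ranking):
    """Recode one forced-choice block into binary pairwise outcomes.

    `ranking` maps item id -> rank from the respondent (1 = most preferred).
    For each item pair (i, j), the outcome is 1 if i was ranked above j.
    """
    items = sorted(ranking)
    return {(i, j): int(ranking[i] < ranking[j])
            for i, j in combinations(items, 2)}

# One block of three statements ranked by one respondent
print(block_to_pairs({"A": 2, "B": 1, "C": 3}))
# {('A', 'B'): 0, ('A', 'C'): 1, ('B', 'C'): 1}
```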
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2019
Machine-learning methods are used frequently in many fields, yet relatively few studies have applied them to identifying potential testing fraud. In this study, a technical review of a recently developed state-of-the-art algorithm, Extreme Gradient Boosting (XGBoost), is…
Descriptors: Identification, Test Items, Deception, Cheating
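A hedged sketch of the kind of XGBoost classifier such a study would evaluate, with synthetic features standing in for real fraud indicators (e.g., response-similarity or answer-change statistics; the feature semantics here are assumptions):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic data: 1,000 examinees, 10 features, binary fraud labels
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```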
Han, Kyung T.; Dimitrov, Dimiter M.; Al-Mashary, Faisal – Educational and Psychological Measurement, 2019
The "D"-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number…
Descriptors: Test Construction, Scoring, Test Items, Adaptive Testing
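One simplified reading of D-scoring for binary items treats an examinee's score as a difficulty-weighted proportion correct, with item difficulty taken as d_j = 1 - p_j (p_j being the proportion answering item j correctly). The sketch below is that simplification only, not Dimitrov's full scoring and equating procedure:

```python
import numpy as np

def d_scores(x: np.ndarray) -> np.ndarray:
    """Difficulty-weighted proportion-correct scores on a 0-1 scale."""
    d = 1.0 - x.mean(axis=0)   # item difficulties estimated from the sample
    return x @ d / d.sum()     # harder items earn proportionally more credit

x = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 0, 0, 0]], dtype=float)
print(np.round(d_scores(x), 3))  # [0.167 0.5   0.   ]
```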
Kalinowski, Steven T. – Educational and Psychological Measurement, 2019
Item response theory (IRT) is a statistical paradigm for developing educational tests and assessing students. IRT, however, currently lacks an established graphical method for examining model fit for the three-parameter logistic model, the most flexible and popular IRT model in educational testing. A method is presented here to do this. The graph,…
Descriptors: Item Response Theory, Educational Assessment, Goodness of Fit, Probability
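Kalinowski's particular graph is not reproduced here, but the comparison it builds on is standard: empirical proportions correct within ability bins plotted against the fitted 3PL curve. A sketch with illustrative item parameters:

```python
import numpy as np
import matplotlib.pyplot as plt

def p_3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Simulate one item, then bin examinees by ability
rng = np.random.default_rng(1)
theta = rng.normal(size=5000)
a, b, c = 1.2, 0.3, 0.2
y = rng.random(5000) < p_3pl(theta, a, b, c)

bins = np.quantile(theta, np.linspace(0, 1, 11))
mids = [theta[(theta >= lo) & (theta < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
props = [y[(theta >= lo) & (theta < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]

grid = np.linspace(-3, 3, 200)
plt.plot(grid, p_3pl(grid, a, b, c), label="3PL model")
plt.plot(mids, props, "o", label="empirical proportions")
plt.xlabel("ability"); plt.ylabel("P(correct)"); plt.legend(); plt.show()
```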
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
This note highlights and illustrates the links between item response theory and classical test theory in the context of polytomous items. An item response modeling procedure is discussed that can be used for point and interval estimation of the individual true score on any item in a measuring instrument or item set following the popular and widely…
Descriptors: Correlation, Item Response Theory, Test Items, Scores
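The IRT-CTT link in question rests on the fact that an item's CTT true score at ability theta is its model-implied expected score, the category values weighted by their probabilities. A sketch under the graded response model (discrimination and thresholds are made up):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for one graded-response-model item;
    b holds ordered thresholds, categories run 0..len(b)."""
    star = 1 / (1 + np.exp(-a * (theta - np.asarray(b))))  # P(X >= k)
    upper = np.concatenate(([1.0], star))
    lower = np.concatenate((star, [0.0]))
    return upper - lower                                   # P(X = k)

def item_true_score(theta, a, b):
    """CTT true score of the item at theta: E[X | theta]."""
    p = grm_category_probs(theta, a, b)
    return np.sum(np.arange(len(p)) * p)

print(item_true_score(theta=0.5, a=1.5, b=[-1.0, 0.0, 1.2]))  # ~1.84 on a 0-3 item
```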
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-Sadaawi, Abdullah – Educational and Psychological Measurement, 2019
The purpose of the present study was to apply the methodology developed by Raykov for modeling item-specific variance to the measurement of internal consistency reliability with longitudinal data. Participants were a randomly selected sample of 500 individuals who took a professional qualifications test in Saudi Arabia over four different…
Descriptors: Test Reliability, Test Items, Longitudinal Studies, Foreign Countries
Jordan, Pascal; Spiess, Martin – Educational and Psychological Measurement, 2019
Factor loadings and item discrimination parameters play a key role in scale construction. A multitude of heuristics regarding their interpretation are hardwired into practice--for example, neglecting low loadings and assigning items to exactly one scale. We challenge the common-sense interpretation of these parameters by providing counterexamples…
Descriptors: Test Construction, Test Items, Item Response Theory, Factor Structure
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), ideally, a single item selection index should be created to simultaneously regulate precision, exposure status, and attribute balancing. For this purpose, in this study, we first proposed an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing
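The authors' standardized index is truncated above. As a generic illustration only (not their criterion), an item selection rule can fold a precision term, an exposure penalty, and an attribute-balance bonus into one index:

```python
import numpy as np

def select_item(info, exposure, item_attrs, attr_deficit, w=(1.0, 1.0, 1.0)):
    """Pick the item maximizing a composite selection index.

    info: per-item precision (e.g., a CDM discrimination measure);
    exposure: per-item exposure rates so far (0..1);
    item_attrs: 0/1 items-by-attributes matrix;
    attr_deficit: per-attribute shortfall against the blueprint so far.
    """
    balance = item_attrs @ attr_deficit   # reward under-covered attributes
    index = w[0] * info - w[1] * exposure + w[2] * balance
    return int(np.argmax(index))

info = np.array([0.8, 0.6, 0.9])
exposure = np.array([0.30, 0.05, 0.45])
item_attrs = np.array([[1, 0], [0, 1], [1, 1]])
attr_deficit = np.array([0.0, 0.4])       # attribute 2 under-covered
print(select_item(info, exposure, item_attrs, attr_deficit))  # -> 1
```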
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
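The two uses of plausible values contrasted in the abstract can be sketched side by side, assuming a normal approximation to each examinee's ability posterior (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
post_mean = np.array([-0.4, 0.1, 0.9])   # posterior means from some IRT calibration
post_sd = np.array([0.30, 0.28, 0.35])   # posterior SDs

M = 5  # the conventional count for population-level statistics
pv = rng.normal(post_mean, post_sd, size=(M, len(post_mean)))

# Population-level use: compute the statistic on each draw, then average
print("population mean estimate:", pv.mean(axis=1).mean())

# Individual use: average each person's draws as a point estimate;
# the study asks how large M must be for these to be accurate
print("person point estimates:", pv.mean(axis=0))
```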
Qiu, Yuxi; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
This study aimed to assess the accuracy of the empirical item characteristic curve (EICC) preequating method given the presence of test speededness. The simulation design of this study considered the proportion of speededness, speededness point, speededness rate, proportion of missing on speeded items, sample size, and test length. After crossing…
Descriptors: Accuracy, Equated Scores, Test Items, Nonparametric Statistics
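A minimal sketch of how speededness might be injected into simulated responses, covering just two of the design factors named in the abstract (proportion of speededness and the speededness point); all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 500, 40
responses = rng.integers(0, 2, size=(n_persons, n_items)).astype(float)

prop_speeded = 0.20   # share of examinees affected
speed_point = 30      # item index where speededness can begin
speeded = rng.random(n_persons) < prop_speeded

for p in np.where(speeded)[0]:
    stop = rng.integers(speed_point, n_items)  # examinee stops somewhere past the point
    responses[p, stop:] = np.nan               # not-reached tail

print("missing rate on the speeded section:",
      np.isnan(responses[:, speed_point:]).mean())
```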
Dimitrov, Dimiter M.; Luo, Yong – Educational and Psychological Measurement, 2019
An approach to scoring tests with binary items, referred to as D-scoring method, was previously developed as a classical analog to basic models in item response theory (IRT) for binary items. As some tests include polytomous items, this study offers an approach to D-scoring of such items and parallels the results with those obtained under the…
Descriptors: Scoring, Test Items, Item Response Theory, Psychometrics
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
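The data-generation side of such a simulation is straightforward to sketch: continuous one-factor responses are cut at thresholds to yield ordered categories (the WLSMV estimation itself would then run in dedicated CFA software). Loadings and thresholds below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n, n_items, loading = 300, 15, 0.7

factor = rng.normal(size=(n, 1))
unique = rng.normal(scale=np.sqrt(1 - loading**2), size=(n, n_items))
continuous = loading * factor + unique   # latent continuous item responses

thresholds = [-1.0, 0.0, 1.0]            # cuts giving 4 ordered categories
ordinal = np.digitize(continuous, thresholds)
print(ordinal.shape, np.bincount(ordinal.ravel()))
```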
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect of computerized adaptive multistage testing. However, the presence of missing responses may complicate the estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
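A toy sketch of why missing-response handling matters for routing: the same stage-1 responses can route an examinee to different second-stage modules depending on whether missing answers are scored as incorrect or ignored (cutoff and data are made up):

```python
import numpy as np

def route(stage1, cutoff=0.5, treat_missing_as="incorrect"):
    """Return the next module ('hard' or 'easy') from stage-1 binary responses."""
    r = np.asarray(stage1, dtype=float)
    if treat_missing_as == "incorrect":
        score = np.nan_to_num(r, nan=0.0).mean()  # NaN scored as 0
    else:
        score = np.nanmean(r)                     # NaN ignored
    return "hard" if score >= cutoff else "easy"

stage1 = [1, 0, np.nan, np.nan, 0, 1]
print(route(stage1, treat_missing_as="incorrect"))  # 2/6 ~ 0.33 -> 'easy'
print(route(stage1, treat_missing_as="ignore"))     # 2/4 = 0.50 -> 'hard'
```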