Showing 151 to 165 of 3,686 results
Peer reviewed
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Peer reviewed
Cho, Sun-Joo; Preacher, Kristopher J. – Educational and Psychological Measurement, 2016
Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…
Descriptors: Error of Measurement, Error Correction, Multivariate Analysis, Hierarchical Linear Modeling
Peer reviewed
Park, Jungkyu; Yu, Hsiu-Ting – Educational and Psychological Measurement, 2016
The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze data with a nested structure. The nonparametric version of an MLCM assumes a discrete latent variable at a higher-level nesting structure to account for the dependency among observations nested within a higher-level unit. In…
Descriptors: Hierarchical Linear Modeling, Nonparametric Statistics, Data Analysis, Simulation
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Educational and Psychological Measurement, 2016
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
Descriptors: Reaction Time, Models, Multivariate Analysis, Goodness of Fit
Peer reviewed
Gwet, Kilem L. – Educational and Psychological Measurement, 2016
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Descriptors: Differences, Correlation, Statistical Significance, Statistical Analysis
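The Gwet abstract above concerns testing the difference between two correlated agreement coefficients. As background, a minimal sketch of the quantity being compared, Cohen's kappa for a single square rating table (the values in the table are made up for illustration; this is not Gwet's proposed test, which avoids resampling and advanced modeling):

```python
# Illustrative only: chance-corrected agreement (Cohen's kappa) for one
# 2x2 contingency table of two raters' classifications.
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square rater-by-rater contingency table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                              # observed agreement
    pe = (table.sum(axis=1) @ table.sum(axis=0)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Two raters classify 50 subjects into 2 categories (hypothetical counts).
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # -> 0.4
```

When the same subjects are rated under two conditions, the two resulting kappas are correlated, which is why a naive two-sample test is inappropriate and a dedicated difference test is needed.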
Peer reviewed
Kang, Yoonjeong; McNeish, Daniel M.; Hancock, Gregory R. – Educational and Psychological Measurement, 2016
Although differences in goodness-of-fit indices (ΔGOFs) have been advocated for assessing measurement invariance, studies that advanced recommended differential cutoffs for adjudicating invariance actually utilized a very limited range of values representing the quality of indicator variables (i.e., magnitude of loadings). Because quality of…
Descriptors: Measurement, Goodness of Fit, Guidelines, Models
Peer reviewed
Lance, Charles E.; Fan, Yi – Educational and Psychological Measurement, 2016
We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…
Descriptors: Multitrait Multimethod Techniques, Factor Analysis, Models, Goodness of Fit
Peer reviewed
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2016
This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the…
Descriptors: Test Reliability, Test Format, Test Items, Differences
Peer reviewed
Padilla, Miguel A.; Divers, Jasmin – Educational and Psychological Measurement, 2016
Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…
Descriptors: Reliability, Computation, Statistical Analysis, Comparative Analysis
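The Padilla and Divers abstract contrasts coefficient omega with coefficient alpha. A minimal sketch of the two formulas, assuming a one-factor (congeneric) model with known standardized loadings; the covariance matrix and loadings below are invented for illustration:

```python
# Sketch of two composite-reliability formulas: Cronbach's alpha from an
# item covariance matrix, and McDonald's omega from one-factor loadings
# and error variances. All numbers are hypothetical.
import numpy as np

def coefficient_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix."""
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    return (k / (k - 1)) * (1 - np.trace(cov) / cov.sum())

def coefficient_omega(loadings, error_vars):
    """McDonald's omega from factor loadings and error variances."""
    lam = np.sum(loadings)
    return lam**2 / (lam**2 + np.sum(error_vars))

# Equal loadings (tau-equivalence): alpha is unbiased here.
cov = np.full((3, 3), 0.5) + 0.5 * np.eye(3)  # variances 1, covariances 0.5
print(round(coefficient_alpha(cov), 3))        # -> 0.75

# Unequal (congeneric) loadings: omega remains appropriate.
lam = [0.9, 0.7, 0.5]
theta = [1 - l**2 for l in lam]                # standardized error variances
print(round(coefficient_omega(lam, theta), 3)) # -> 0.753
```

This illustrates the abstract's point: alpha and omega coincide under equal loadings, while omega stays unbiased when loadings differ across items.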
Peer reviewed
Wyse, Adam E.; Babcock, Ben – Educational and Psychological Measurement, 2016
Continuously administered examination programs, particularly credentialing programs that require graduation from educational programs, often experience seasonality where distributions of examinee ability may differ over time. Such seasonality may affect the quality of important statistical processes, such as item response theory (IRT) item…
Descriptors: Test Items, Item Response Theory, Computation, Licensing Examinations (Professions)
Peer reviewed
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2016
This article proposes a general parametric item response theory approach for identifying sources of misfit in response patterns that have been classified as potentially inconsistent by a global person-fit index. The approach, which is based on the weighted least squared regression of the observed responses on the model-expected responses, can be…
Descriptors: Regression (Statistics), Item Response Theory, Goodness of Fit, Affective Measures
Peer reviewed
Finch, Holmes; Edwards, Julianne M. – Educational and Psychological Measurement, 2016
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Descriptors: Item Response Theory, Computation, Nonparametric Statistics, Bayesian Statistics
Peer reviewed
Wind, Stefanie A.; Engelhard, George, Jr. – Educational and Psychological Measurement, 2016
Mokken scale analysis is a probabilistic nonparametric approach that offers statistical and graphical tools for evaluating the quality of social science measurement without placing potentially inappropriate restrictions on the structure of a data set. In particular, Mokken scaling provides a useful method for evaluating important measurement…
Descriptors: Nonparametric Statistics, Statistical Analysis, Measurement, Psychometrics
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
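The Sideridis abstract examines how anxiety and motivation distort item difficulties under the Rasch model. As a reference point, a minimal sketch of the Rasch response function the study builds on, where the probability of a correct response depends only on the gap between person ability (theta) and item difficulty (b); the values below are illustrative:

```python
# Rasch model response probability: P(correct) = logistic(theta - b).
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))             # ability equals difficulty -> 0.5
print(round(rasch_p(0.0, 1.0), 3))   # a harder item -> 0.269
```

Any systematic shift in estimated b across emotional conditions, as the abstract hypothesizes, would change these probabilities and hence the scale's psychometric characteristics.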