Showing 1 to 15 of 43 results
Peer reviewed
Biancarosa, Gina; Kennedy, Patrick C.; Carlson, Sarah E.; Yoon, HyeonJin; Seipel, Ben; Liu, Bowen; Davison, Mark L. – Educational and Psychological Measurement, 2019
Prior research suggests that subscores from a single achievement test seldom add value over a single total score. Such scores typically correspond to subcontent areas in the total content domain, but content subdomains might not provide a sound basis for subscores. Using scores on an inferential reading comprehension test from 625 third, fourth,…
Descriptors: Scores, Scoring, Achievement Tests, Grade 3
Peer reviewed
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John; Stark, Stephen – Educational and Psychological Measurement, 2019
In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of…
Descriptors: Hierarchical Linear Modeling, Structural Equation Models, Computation, Identification
Peer reviewed
Briggs, Derek C.; Alzen, Jessica L. – Educational and Psychological Measurement, 2019
Observation protocol scores are commonly used as status measures to support inferences about teacher practices. When multiple observations are collected for the same teacher over the course of a year, some portion of a teacher's score on each occasion may be attributable to the rater, lesson, and the time of year of the observation. All three of…
Descriptors: Observation, Inferences, Generalizability Theory, Scores
Peer reviewed
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing
Lockwood, J. R.; Castellano, Katherine E. – Educational and Psychological Measurement, 2017
Student Growth Percentiles (SGPs) increasingly are being used in the United States for inferences about student achievement growth and educator effectiveness. Emerging research has indicated that SGPs estimated from observed test scores have large measurement errors. As such, little is known about "true" SGPs, which are defined in terms…
Descriptors: Item Response Theory, Correlation, Student Characteristics, Academic Achievement
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
Peer reviewed
Huang, Francis L.; Cornell, Dewey G. – Educational and Psychological Measurement, 2016
Bullying among youth is recognized as a serious student problem, especially in middle school. The most common approach to measuring bullying is through student self-report surveys that ask questions about different types of bullying victimization. Although prior studies have shown that question-order effects may influence participant responses, no…
Descriptors: Victims of Crime, Bullying, Middle School Students, Measures (Individuals)
Peer reviewed
Cheng, Ying; Shao, Can; Lathrop, Quinn N. – Educational and Psychological Measurement, 2016
Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one variable…
Descriptors: Test Bias, Models, Simulation, Sample Size
Peer reviewed
Li, Feiming; Cohen, Allan; Bottge, Brian; Templin, Jonathan – Educational and Psychological Measurement, 2016
Latent transition analysis (LTA) was initially developed to provide a means of measuring change in dynamic latent variables. In this article, we illustrate the use of a cognitive diagnostic model, the DINA model, as the measurement model in an LTA, thereby demonstrating a means of analyzing change in cognitive skills over time. An example is…
Descriptors: Statistical Analysis, Change, Thinking Skills, Measurement
Peer reviewed
Konstantopoulos, Spyros; Li, Wei; Miller, Shazia R.; van der Ploeg, Arie – Educational and Psychological Measurement, 2016
We use data from a large-scale experiment conducted in Indiana in 2009-2010 to examine the impact of two interim assessment programs (mCLASS and Acuity) across the mathematics and reading achievement distributions. Specifically, we focus on whether the use of interim assessments has a particularly strong effect on improving outcomes for low…
Descriptors: Educational Assessment, Mathematics Achievement, Reading Achievement, Regression (Statistics)
Peer reviewed
Attali, Yigal; Laitusis, Cara; Stone, Elizabeth – Educational and Psychological Measurement, 2016
There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items place different cognitive demands on students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…
Descriptors: Test Items, Questioning Techniques, Differences, Student Reaction
Peer reviewed
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
Peer reviewed
Wetzel, Eunike; Xu, Xueli; von Davier, Matthias – Educational and Psychological Measurement, 2015
In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…
Descriptors: Surveys, Regression (Statistics), Models, Research Methodology
Peer reviewed
Nezhnov, Peter; Kardanova, Elena; Vasilyeva, Marina; Ludlow, Larry – Educational and Psychological Measurement, 2015
The present study tested the possibility of operationalizing levels of knowledge acquisition based on Vygotsky's theory of cognitive growth. An assessment tool (SAM-Math) was developed to capture a hypothesized hierarchical structure of mathematical knowledge consisting of procedural, conceptual, and functional levels. In Study 1, SAM-Math was…
Descriptors: Knowledge Level, Mathematics, Cognitive Development, Vertical Organization
Peer reviewed
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated-measures data from a single group with the same test design. If operational or item-exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores