# Comparing estimates with other Rasch software

There are many Rasch-specific software packages, as well as IRT packages that can be configured for Rasch models. Each implements its own estimation approach and makes its own assumptions and specifications about the estimates, so comparing or combining measures across packages can be awkward. There are three main considerations:

(a) choice of origin (zero-point);

(b) choice of user-scaling multiplier;

(c) handling of extreme scores (zero = minimum possible, and perfect = maximum possible).

Here is one approach:

Produce person measures from Winsteps and the other computer program on the same data set. For Winsteps set USCALE=1 and UIMEAN=0.

Cross-plot the person measures, with the Winsteps estimates on the x-axis. (This is preferable to comparing item estimates, because item estimates are more parametrization-dependent.)

Draw a best-fit line through the measures, ignoring the measures for extreme scores.

The slope is the user-scaling multiplier to apply. You can do this with USCALE= slope.

The intercept is the correction for origin to apply when comparing measures. You can do this with UIMEAN= y-axis intercept.

The departure of extreme scores from the best-fit line requires adjustment. You can do this with EXTRSCORE=; it may take several runs of Winsteps. If the measures for perfect (maximum possible) scores lie above the best-fit line and those for zero (minimum possible) scores lie below it, decrease EXTRSCORE= in increments of 0.1 or less. If the reverse, increase EXTRSCORE= in increments of 0.1 or less.

With suitable choices of UIMEAN=, USCALE= and EXTRSCORE=, the crossplotted person measures should approximate the identity line.
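The slope and intercept of the best-fit line described above can be computed directly. A minimal sketch, assuming hypothetical arrays of person measures from the two programs (extreme-score entries already removed); `other` is constructed here as an exactly linear transformation purely for illustration:

```python
import numpy as np

# Hypothetical person measures for the same persons, extreme scores removed.
winsteps = np.array([-1.2, -0.5, 0.0, 0.4, 1.1, 1.8])  # x-axis: USCALE=1, UIMEAN=0
other = 1.7 * winsteps + 0.3                            # y-axis: other program (illustrative)

# Least-squares best-fit line: other ≈ slope * winsteps + intercept
slope, intercept = np.polyfit(winsteps, other, 1)

print(f"USCALE= {slope:.3f}")     # user-scaling multiplier  → 1.700
print(f"UIMEAN= {intercept:.3f}") # origin correction        → 0.300
```

With real data the points will not fall exactly on a line, but the fitted slope and intercept still supply the USCALE= and UIMEAN= values to try.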

The item estimates are now as equivalent as they can be, even if they appear very different due to differing parametrizations or estimation procedures.

You may notice scatter of the person measures around the identity line or obvious curvature. These could reflect differential weighting of the items in a response string, the imposition of prior distributions, the choice of approximation to the logistic function, the choice of parametrization of the Rasch model or other reasons. These are generally specific to each software program and become an additional source of error when comparing measures.
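One way to quantify that scatter and curvature is to summarize the departures from the identity line and fit a quadratic to the cross-plot. This is a hedged sketch with hypothetical measure arrays, not part of any Winsteps output:

```python
import numpy as np

# Hypothetical rescaled person measures from two programs (extreme scores removed).
winsteps = np.array([-1.5, -0.8, -0.2, 0.3, 0.9, 1.6, 2.2])
other    = np.array([-1.6, -0.8, -0.1, 0.3, 1.0, 1.5, 2.3])

# Scatter: root-mean-square departure from the identity line.
rmse = np.sqrt(np.mean((other - winsteps) ** 2))

# Curvature: the quadratic coefficient of a 2nd-degree fit; a value well away
# from zero suggests systematic nonlinearity between the two sets of estimates.
quad_coef = np.polyfit(winsteps, other, 2)[0]

print(f"RMSE around identity line: {rmse:.3f}")
print(f"Quadratic coefficient: {quad_coef:.3f}")
```

A small RMSE with a near-zero quadratic coefficient indicates the two programs' measures are effectively interchangeable after rescaling.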

There are technical details at Estimation.

Winsteps (JMLE) vs. CMLE and RUMM (PMLE): CMLE and RUMM estimates are more central than Winsteps estimates. Winsteps estimates can be adjusted toward them using STBIAS=.

Winsteps (JMLE) vs. ConQuest (MMLE): the estimate differences arise primarily because:

1. ConQuest assumes a regular shape to the person-ability distribution. Winsteps does not.

2. ConQuest includes extreme person scores (zero and perfect) when estimating item difficulties. Winsteps does not.

There are also other technical differences, but these are usually inconsequential with large datasets.

Maximum Likelihood Estimates (MLE, any type) vs. Warm's Mean Weighted Likelihood Estimates (WLE): Warm estimates are usually slightly more central than MLE estimates.

Help for Winsteps Rasch Measurement Software: www.winsteps.com. Author: John Michael Linacre
