Displacement measures

The DISPLACE column should appear only in anchored or TARGET= runs. Otherwise its appearance indicates lack of convergence. If small displacements are being reported, try tightening the convergence criteria with LCONV=.

 

Anchored analyses (IAFILE=, PAFILE=): if large displacements are shown for the anchored items or persons, try changing the setting of ANCESTIM=.

 

The displacement is an estimate of the amount to add to the MEASURE to make it conform with the data.

 

Positive displacement for a person ability indicates that the observed person score is higher than the expected person score based on the reported measure (usually an anchor value).

 

Positive displacement for an item difficulty indicates that the observed item score is lower than the expected item score based on the reported measure (usually an anchor value).

 

The DISPLACE value is the size of the change in a parameter estimate that would be observed in the next estimation iteration if this parameter were free (unanchored) and all other parameter estimates were anchored at their current values.

 

For a parameter (item or person) that is anchored in the main estimation, DISPLACE indicates the size of disagreement between an estimate based on the current data and the anchor value.

 

For an unanchored item, a DISPLACE value large enough to be of concern indicates that the convergence criteria are not tight enough: see LCONV=, RCONV=, CONVERGE=, MJMLE=.

 

It is calculated using Newton-Raphson estimation.

 

Person: DISPLACE logits = (observed marginal score - expected marginal score)/(model variance of the marginal score)

 

Item:  DISPLACE logits = - (observed marginal score - expected marginal score)/(model variance of the marginal score)

 

DISPLACE approximates the displacement of the estimate away from the statistically better value which would result from the best fit of your data to the model. Each DISPLACE value is computed as though all other parameter estimates are exact. Only meaningfully large values are displayed. They indicate lack of convergence, or the presence of anchored or targeted values. The best fit value can be approximated by adding the displacement to the reported measure or calibration. It is computed as:

DISPLACE = (observed score - expected score based on reported measure) / (Rasch-model-derived score variance).

 

The "observed score" is the raw score for the person or item.

The "expected score" is the raw score that the Rasch model predicts based on the current values of the person abilities and item difficulties.

The "Rasch-model-derived score variance" is the reciprocal of the squared standard error of the person or item measure, i.e., the model information.
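Under the dichotomous Rasch model, this computation can be sketched as follows. This is a minimal illustration, not Winsteps code; the item difficulties and scores are hypothetical:

```python
import math

def displace_person(ability, item_difficulties, observed_score):
    """Newton-Raphson step: the displacement (in logits) to add to a
    reported person measure to make it conform better with the data."""
    # P(success) on each item under the dichotomous Rasch model
    probs = [1.0 / (1.0 + math.exp(-(ability - d))) for d in item_difficulties]
    expected = sum(probs)                        # expected marginal score
    variance = sum(p * (1 - p) for p in probs)   # model variance of the score
    # for an item, the same quotient is summed over persons, with a minus sign
    return (observed_score - expected) / variance

# hypothetical: a person anchored at 0.0 logits scores 4 on five items
item_difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
d = displace_person(0.0, item_difficulties, 4)   # about +1.35 logits
```

The positive result means the observed score (4) exceeds the expected score (2.5) at the anchored ability, matching the sign convention described above.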

 

This value is the Newton-Raphson adjustment to the reported measure to obtain the measure estimated from the current data.

In BTD, p. 64, equation 3.7.11: di(j) is the anchor value, di(j+1) is the value estimated from the current data, and di(j+1) - di(j) is the displacement, given by the right-hand term of the estimation equation; see also step 6 of www.rasch.org/rmt/rmt102t.htm.

In RSA, p. 77, equation 4.4.6: di(t) is the anchor value, di(t+1) is the value estimated from the current data, and di(t+1) - di(t) is the displacement, given by the right-hand term of the estimation equation; see also step 6 of www.rasch.org/rmt/rmt122q.htm.

 

Standard Error of the Displacement Measure

+----------------------------------------------------------------------------------------+
|ENTRY    RAW                   MODEL|   INFIT  |  OUTFIT  |PTMEA|        |              |
|NUMBER  SCORE  COUNT  MEASURE  S.E. |MNSQ  ZSTD|MNSQ  ZSTD|CORR.|DISPLACE| TAP          |
|------------------------------------+----------+----------+-----+--------+--------------|
|     3     35     35    2.00A    .74| .69   -.6| .22    .5|  .00|   -3.90| 1-2-4        |
+----------------------------------------------------------------------------------------+

 

Since the reported "measure" is treated as a constant when the displacement is computed, the S.E. of the reported measure is also the S.E. of the displacement. The DISPLACE column shows the displacement in the same units as the MEASURE: logits when USCALE=1, the default. If the anchored measure value is considered to be exact, i.e., a point estimate, then the S.E. column indicates the standard error of the displacement. The statistical significance of the displacement is given by

t = DISPLACE / S.E. with approximately COUNT degrees of freedom.
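Using the values in the table row above (DISPLACE = -3.90, S.E. = .74, COUNT = 35), the computation is simply:

```python
displace = -3.90
se = 0.74
count = 35              # approximate degrees of freedom
t = displace / se       # about -5.3: far beyond chance with ~35 d.f.,
                        # so this anchor value clearly disagrees with the data
```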

 

This evaluates how likely the reported size of the displacement is if its "true" size were zero. But both the displacements and their standard errors are estimates, so the t-value may be slightly mis-estimated. Consequently, allow for a margin of error when interpreting the t-values.

 

If the anchored measure value has a standard error obtained from a different data set, then the standard error of the displacement is:

S.E.(displacement) = sqrt( S.E.² + S.E.(anchor value from original data)² )
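As a quick sketch of this combination (the anchor's standard error of 0.30 here is hypothetical):

```python
import math

def se_displacement(se_current, se_anchor):
    """Standard error of the displacement when the anchor value
    carries its own S.E. from the original data set."""
    return math.sqrt(se_current ** 2 + se_anchor ** 2)

# e.g. S.E. = 0.74 from the current data, S.E. = 0.30 for the anchor value
combined = se_displacement(0.74, 0.30)   # about 0.80
```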

 

When does large displacement indicate that an item or person should be unanchored or omitted?

This depends on your purpose. If you are anchoring items in order to measure three additional people to add to your measured database of thousands, then item displacement doesn't matter.

 

Anchor values should be validated before they are used. Do two analyses:

(a) with no items anchored (i.e., all items floating), produce person and item measures.

(b) with anchored items anchored, produce person and item measures.

 

Then cross-plot the item difficulties for the two runs, and also the person measures. The person measures will usually form an almost straight line.

 

For the item difficulties, the unanchored items will form a straight line. Some anchored items may be noticeably off that line; these are candidates for dropping as anchors. The effect of dropping or un-anchoring a "displaced" anchor item is to realign the person measures by roughly (displacement / (number of remaining anchored items)).
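For instance (hypothetical numbers), dropping one of six anchor items whose displacement is 1.2 logits:

```python
displacement = 1.2        # logits, for the dropped anchor item
remaining_anchors = 5     # six anchors minus the one dropped
shift = displacement / remaining_anchors   # person measures move ~0.24 logits
```

A shift this size is small relative to the 0.5-logit rule of thumb for negligible impact.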

 

Random displacements of less than 0.5 logits are unlikely to have much impact in a test instrument.

"In other work we have found that when [test length] is greater than 20, random values of [discrepancies in item calibration] as high as 0.50 [logits] have negligible effects on measurement." ( Wright & Douglas, 1976, "Rasch Item Analysis by Hand")

 

"They allow the test designer to incur item discrepancies, that is item calibration errors, as large as 1.0 [logit]. This may appear unnecessarily generous, since it permits use of an item of difficulty 2.0, say, when the design calls for 1.0, but it is offered as an upper limit because we found a large area of the test design domain to be exceptionally robust with respect to independent item discrepancies." (Wright & Douglas, 1975, "Best Test Design and Self-Tailored Testing.")

 

Most DIF work seems to be done by statisticians with little interest in, and often no access to, the substantive material. So they have no qualitative criteria on which to base their DIF acceptance/rejection decisions. The result is that the number of items with DIF is grossly over-reported (Hills, J.R. (1989). Screening for potentially biased items in testing programs. Educational Measurement: Issues and Practice, 8(4), 5-11).


Help for Winsteps Rasch Measurement Software: www.winsteps.com. Author: John Michael Linacre


Rasch Publications
Rasch Measurement Transactions (free, online)
Rasch Measurement research papers (free, online)
Probabilistic Models for Some Intelligence and Attainment Tests, Georg Rasch
Applying the Rasch Model, 3rd Ed., Bond & Fox
Best Test Design, Wright & Stone
Rating Scale Analysis, Wright & Masters
Introduction to Rasch Measurement, E. Smith & R. Smith
Introduction to Many-Facet Rasch Measurement, Thomas Eckes
Invariant Measurement with Raters and Rating Scales: Rasch Models for Rater-Mediated Assessments, George Engelhard, Jr. & Stefanie Wind
Statistical Analyses for Language Testers, Rita Green
Rasch Models: Foundations, Recent Developments, and Applications, Fischer & Molenaar
Journal of Applied Measurement
Rasch models for measurement, David Andrich
Constructing Measures, Mark Wilson
Rasch Analysis in the Human Sciences, Boone, Staver, Yale
in Spanish: Análisis de Rasch para todos, Agustín Tristán; Mediciones, Posicionamientos y Diagnósticos Competitivos, Juan Ramón Oreja Rodríguez