Table 30.2, 30.3 DIF bias/interaction = Item measures for person classes

Table 30 supports the investigation of item bias, also called Differential Item Functioning (DIF): interactions between individual items and types of persons. Specify DIF= to identify the person-classifying indicators in the person labels. The item measures by person class are plotted in the DIF Plot.


In Table 30.1 - the hypothesis is "this item has the same difficulty for two groups"
In Table 30.2, 30.3 - the hypothesis is "this item has the same difficulty as its average difficulty for all groups"
In Table 30.4 - the hypothesis is "this item has no overall DIF across all groups"


Example output:

You want to examine item bias (DIF) between Females and Males in Exam1.txt. You need a column in your Winsteps person label that has two (or more) demographic codes, say "F" for female and "M" for male (or "0" and "1" if you like dummy variables) in column 9.
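In control-file terms, the setup above might look like this — a minimal sketch; the column-9 location and the @GENDER label are this example's assumptions:

```
; Sketch of the control-file lines involved (assuming the gender code
; is in column 9 of the person label):
@GENDER = $S9W1   ; gender code: starts in column 9, width 1
DIF = @GENDER     ; classify persons by gender for the DIF analysis
TFILE=*
30                ; output Table 30
*
```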


Table 30.1 is best for pairwise comparisons, e.g., Females vs. Males. Use Table 30.1 if you have two classes.


Table 30.2 or Table 30.3 are best for multiple comparisons, e.g., regions against the national average. Table 30.2 sorts by class then item. Table 30.3 sorts by item then class.



Table 30.2 and Table 30.3 display the local difficulty/ability estimates underlying the paired DIF analysis. These can be plotted directly from the Plots menu.


The DIF class specification identifies the columns containing the DIF classifications; here DIF= is set to @GENDER using the selection rules.


The DIF effects are shown ordered by CLASS within item (column of the data matrix).


KID CLASS identifies the CLASS of persons. KID is the person term specified with PERSON=; e.g., the first CLASS is "F".

OBSERVATIONS are what are seen in the data

COUNT is the number of observations of the classification used for DIF estimation, e.g., 18 non-extreme F persons responded to TAP item 1.

AVERAGE is the average observation on the classification, e.g., 0.89 is the p-value, proportion-correct-value, of item 4 for F persons.
COUNT * AVERAGE = total score of person class on the item
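The COUNT * AVERAGE relationship can be checked directly; the pairing of values below is illustrative, since the COUNT and AVERAGE examples quoted above refer to different items:

```python
# Recover the raw total score of a person class on an item from the
# Table 30 columns. Values are illustrative, not from one table row.
count = 18       # persons in the class who responded to the item
average = 0.89   # average observation (proportion correct for a dichotomy)

total_score = count * average
print(round(total_score, 2))  # 16.02
```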

BASELINE is the prediction without DIF

EXPECT is the expected value of the average observation when there is no DIF, e.g., 0.92 is the expected proportion-correct-value for F without DIF.

MEASURE is what the overall item measure would be without DIF, e.g., -4.40 is the overall item difficulty of item 4 as reported in Table 14.

DIF: Differential Item Functioning

DIF SCORE is the difference between the observed and the expected average observations, e.g., 0.89 - 0.92 = -0.03

DIF MEASURE is the item difficulty for this class, e.g., item 4 has a local difficulty of -3.93 for CLASS F.

The average of DIF measures across CLASS for an item is not the BASELINE MEASURE because score-to-measure conversion is non-linear. ">" (maximum score), "<" (minimum score) indicate measures corresponding to extreme scores.

DIF SIZE is the difference between the DIF MEASURE for this class and the BASELINE MEASURE, i.e., -3.93 - (-4.40) = 0.48 (the rounded values shown give 0.47; the table is computed from unrounded estimates). Item 4 is 0.48 logits more difficult for class F than expected.

DIF S.E. is the approximate standard error of the difference, e.g., 0.89 logits

DIF t is an approximate Student's t-statistic, estimated as DIF SIZE divided by DIF S.E., with a little less than (COUNT-1) degrees of freedom.

Prob. is the two-sided probability of Student's t. See t-statistics.
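The DIF statistics above can be reproduced from the quoted example values. A sketch in Python — the normal approximation for the two-sided probability is mine, not Winsteps', which uses the Student's t distribution:

```python
import math

# Worked example with the Table 30.1 values quoted above (item 4, CLASS F).
average = 0.89       # observed average score for F (proportion correct)
expect = 0.92        # expected average score without DIF
dif_measure = -3.93  # local item difficulty for CLASS F (logits)
baseline = -4.40     # overall item difficulty without DIF (logits)
dif_se = 0.89        # approximate standard error of the DIF size (logits)

dif_score = average - expect       # -0.03
dif_size = dif_measure - baseline  # 0.47 from these rounded values
t = dif_size / dif_se              # approximate Student's t, about 0.53

# Winsteps reports the two-sided probability of Student's t with a little
# less than COUNT-1 degrees of freedom; with the stdlib only, a normal
# approximation via erfc is close enough for moderate counts.
prob = math.erfc(abs(t) / math.sqrt(2.0))

print(round(dif_score, 2), round(dif_size, 2), round(t, 2))
```

A |t| this small, with a probability well above 0.05, would not be flagged as significant DIF.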


These numbers are plotted in the DIF plot. Here item 4 is shown. The y-axis is the "DIF Measure".



Example: Where do I find the appropriate difficulties for my classes, both for items that exhibit DIF and for those that don't?


The DIF-sensitive difficulties are shown as "DIF Measure" in Table 30.1. They are more conveniently listed in Table 30.2. The "DIF Size" in Table 30.2 or Table 30.3 shows the size of the DIF relative to the overall measure in the IFILE=.


To apply the DIF measures as item difficulties, you would need to produce a list of item difficulties for each group, then analyze that group (e.g., with PSELECT=) using the specified list of item difficulties as an anchor file (IAFILE=).


My approach would be to copy Table 30.3 into an Excel spreadsheet, then use "Data", "Text to Columns" to put each Table column into a separate Excel column. The anchor file would have the item number in the first column, and either the overall "baseline measure" or the group "DIF measure" in the second column. Then copy and paste these two columns into a .txt anchor file.
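The copy-and-paste route can also be scripted. A minimal sketch, assuming the relevant Table 30.3 columns have already been extracted into (entry number, measure) pairs — the measures below are made-up illustrations, not real output:

```python
# Build a two-column IAFILE= anchor file: item entry number, then the
# anchoring difficulty (either the overall baseline measure or the
# group's DIF measure). The values here are hypothetical.
rows = [
    (1, -4.40),
    (2, -2.07),
    (3, 0.33),
]

anchor_lines = [f"{entry} {measure:.2f}" for entry, measure in rows]
anchor_text = "\n".join(anchor_lines)
print(anchor_text)
# Save the output as, e.g., anchors.txt and specify IAFILE=anchors.txt
# in the control file for the group analysis.
```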

Help for Winsteps Rasch Measurement Software: Author: John Michael Linacre
