Bias/interaction (DIF, DPF, DRF) estimation
After estimating the measures, Facets checks whether any Model= specification includes a bias specifier, "B". If so, for each such model, the specified bias/interaction is estimated for all the data (not just the data matching that particular model). Bias can be due to any type of interaction, including Differential Item Functioning (DIF), Differential Person Functioning (DPF), and Differential Rater Functioning (DRF).
This is done by iterating through the data again and, after convergence, doing one further iteration to calculate statistics.
Computation of interactions is a two-stage process: the main measures are estimated and anchored (step 1), then the interactions are estimated from the residuals (steps 2-4).
1. The measures for the elements, and the structure of the rating scale, are estimated. Then those values are anchored (fixed, held constant).
2. The expected values of the observations are subtracted from the observed values of the observations, producing the residuals.
3. The residuals corresponding to each interaction term (e.g., examinees rated by judge 4) are summed. If this sum is not zero, then there is an interaction.
4. The size of the interaction is estimated. A first approximation is:
Interaction (logits) = (sum of residuals) / (sum of the statistical-information in the observations).
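The first approximation in step 4 can be sketched numerically. This is an illustrative sketch, not Facets' internal algorithm; it assumes dichotomous (0/1) observations, for which the statistical information of an observation with expectation p is the binomial variance p(1 - p):

```python
def interaction_logits(observed, expected):
    """First approximation: sum of residuals / sum of statistical information.
    Assumes dichotomous observations, so information = p * (1 - p)."""
    residual_sum = sum(x - p for x, p in zip(observed, expected))
    information = sum(p * (1 - p) for p in expected)
    return residual_sum / information

# Five dichotomous responses for one interaction cell
# (e.g., one group of examinees rated by judge 4):
obs = [1, 0, 1, 1, 0]
exp = [0.6, 0.4, 0.7, 0.5, 0.3]
print(round(interaction_logits(obs, exp), 3))   # → 0.435
```

A non-zero result indicates that this cell's residuals do not cancel out, i.e., an interaction is present.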
Algebraically, first the Bn, Di, Cj, Fk are estimated using a Rasch model such as:
log ( Pnijk / Pnij(k-1)) = Bn - Di - Cj - Fk
Then the Bn, Di, Cj, Fk are anchored, and the bias/interaction terms, e.g., Cij, are estimated:
log ( Pnijk / Pnij(k-1)) = ( Bn - Di - Cj - Fk ) - Cij
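The anchored model above can be sketched as follows. This is an illustrative computation of rating-scale category probabilities, not Facets' source code; `f` is the list of Rasch-Andrich thresholds F1..Fm (category 0 has no threshold), and the measures and bias value in the example are hypothetical:

```python
import math

def category_probs(b_n, d_i, c_j, f, bias_cij=0.0):
    """P(nijk) for k = 0..m, from
    log(Pnijk / Pnij(k-1)) = (Bn - Di - Cj - Fk) - Cij."""
    logit = b_n - d_i - c_j - bias_cij
    log_num = [0.0]                      # log numerator for category 0
    for f_k in f:
        log_num.append(log_num[-1] + logit - f_k)
    denom = sum(math.exp(v) for v in log_num)
    return [math.exp(v) / denom for v in log_num]

# A positive bias term Cij lowers the expected score for this pairing:
p_unbiased = category_probs(0.5, 0.0, 0.0, [-1.0, 1.0])
p_biased = category_probs(0.5, 0.0, 0.0, [-1.0, 1.0], bias_cij=0.8)
```

Because the Bn, Di, Cj, Fk are anchored, only Cij is free to absorb the residual misfit of its cell.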
Thus the Cij are estimated from the residuals left over from the main analysis. The conversion from residual score to bias interaction size is non-linear. Bias sizes may not sum to zero.
Bias (also called interaction, differential item functioning, differential person functioning, etc.) estimation serves several purposes:
1) in diagnosing misfit:
The response residuals are partitioned by element, e.g., by judge-item pairs, and converted into a logit measure. Estimates of unexpected size and statistical significance flag systematic misfit, focusing the misfit investigation.
2) in investigating validity:
A systematic, but small, bias in an item or a judge, for or against any group of persons, may be overwhelmed by the general stochastic component in the responses. Consequently it may not be detected by the usual summary fit statistics. Specifying a bias analysis between elements of facets of particular importance provides a powerful means of investigating and verifying the fairness and functioning of a test.
3) in assessing the effect of bias:
Since bias terms have a measure and a standard error (precision), their size and significance (t-statistic) are reported. This permits the effect of bias to be expressed in the same frame of reference as the element measures. Thus each element measure can be adjusted for any bias which has affected its estimation, e.g., by adding the estimate of bias, which has adversely affected an element, to that element's logit measure. Then the practical implications of removing bias can be determined. Does adjustment for bias alter the pass-fail decision? Does adjustment for bias affect the relative performance of two groups in a meaningful way?
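As a sketch of this adjustment, with purely illustrative numbers (not from any Facets output):

```python
# Hypothetical: an element's measure was lowered by an adverse bias.
measure, bias, bias_se, cut_point = -0.10, 0.25, 0.08, 0.0

t_statistic = bias / bias_se          # significance of the bias term
adjusted = measure + bias             # add back the adverse bias

print(round(t_statistic, 2))          # → 3.12
print(measure >= cut_point)           # → False : fails before adjustment
print(adjusted >= cut_point)          # → True  : passes after adjustment
```

Here the bias is statistically significant and adjusting for it reverses the pass-fail decision, which is exactly the kind of practical implication this analysis is meant to surface.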
4) in partitioning unexplained "error" variance:
The bias logit sample standard deviation, corrected for its measurement error, can be an estimate of the amount of systematic error in the error variance (RMSE).
e.g., for a bias analysis of judges,
Bias logit S.D. = 0.47, mean bias S.E. = 0.32 (Table 13),
so "true" bias S.D. = √(0.47² - 0.32²) ≈ 0.34 logits,
but this exceeds the RMSE for judges = 0.12 (Table 7).
Here, locally extreme judge-person scores cause an overestimation of systematic bias.
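The disattenuation arithmetic above can be written out as:

```python
import math

bias_sd = 0.47   # bias logit S.D. (Table 13)
mean_se = 0.32   # mean bias S.E. (Table 13)

# "true" bias S.D.: observed variance minus error variance, square-rooted
true_bias_sd = math.sqrt(bias_sd**2 - mean_se**2)
print(round(true_bias_sd, 2))   # → 0.34
```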
Adjusting for bias:
A straightforward approach is to split the biased element into two elements: one for one subset of judges (examinees, etc.) and another for the other subset. This can be done by defining an extra element, and then adjusting the element references in the data file accordingly.
Example:
Facets = 4 ; Items, candidates, examiners, bias adjustment
Non-center = 2 ; candidates float
Models =
?, 28, 17, 1, myscale ; allow for bias adjustment between candidate 28 and examiner 17
?, ?, ?, 2, myscale
*
Rating scale = myscale, R9
Labels=
1, Items
...
2, Candidates
...
3, Examiners
....
4, Bias adjustment, A
1, 28-17 adjustment ; the bias will be absorbed by this element, relative to element 2.
2, Everyone else, 0
*
Data=
1_5, 28, 17, 1, 1,2,3,4,5
1_5, 29, 23, 2, 5,4,3,2,1
.....
Non-Uniform DIF
Create a dummy facet for ability levels, then specify a three-way interaction between the person-group dummy facet, the ability-level dummy facet, and the item facet (or whichever facet is under investigation).
Models=?,?B,?B,?,?B,R3 (or whatever)
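One way to build the ability-level dummy facet is to bin each person's estimated measure into strata before the DIF run. A sketch with hypothetical cut points (the function name and cuts are illustrative, not part of Facets):

```python
def ability_level(theta, cuts=(-1.0, 1.0)):
    """Assign an element number in a dummy 'ability level' facet:
    1 = low, 2 = middle, 3 = high (cut points are illustrative)."""
    return 1 + sum(theta >= c for c in cuts)

print([ability_level(t) for t in (-2.0, 0.0, 2.0)])   # → [1, 2, 3]
```

Each person's data record then carries an extra element reference for this dummy facet, so the `?B` interaction can vary across ability strata.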
Paired-Comparison Bias/Interaction Analysis:
Facets produces meaningful numbers in the Bias/interaction analysis when:
1) mirrored data are used with weight = 1.0 instead of 0.5 (for the main analysis, use weight 0.5);
2) the data are arranged so that the Models= is ..., -?,?,... instead of ...,?,-?,...
Help for Facets (64-bit) Rasch Measurement and Rasch Analysis Software: www.winsteps.com Author: John Michael Linacre.