Pivot anchoring

There is usually no problem defining the item difficulty of a standard dichotomous (right/wrong) item. It is the location on the latent variable where there is a 50% chance of success on the item.

 

Combining dichotomous items makes a polytomous super-item. But how do we define the difficulty of a super-item? Since the difficulty of a dichotomous item is the location on the latent variable where the top and bottom categories are equally probable (probability = 0.5), we apply the same logic to the super-item: its difficulty is the location on the latent variable where the top and bottom categories are equally probable. But this definition does not make sense in every situation, so we may need to choose another definition.

 

For instance, if a super-item is a combination of 3 dichotomous items (so that its possible scores are 0, 1, 2, 3), we might define its difficulty as the location on the latent variable where the expected score on the super-item is 1.5. Or the location where scores of 1 and 2 are equally probable. Or the location where the expected score is 1.0, or maybe 2.0. Or .....
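
As an illustration of the first alternative definition, this sketch locates the point where the expected score on a 0-3 super-item is 1.5, treating the super-item as three independent dichotomous Rasch items; the component difficulties are hypothetical values chosen for the example.

```python
import math

def p_success(theta, d):
    """Rasch probability of success on a dichotomous item of difficulty d."""
    return 1.0 / (1.0 + math.exp(-(theta - d)))

def expected_score(theta, difficulties):
    """Expected super-item score = sum of the component probabilities."""
    return sum(p_success(theta, d) for d in difficulties)

def locate(target, difficulties, lo=-10.0, hi=10.0):
    """Bisection: the theta where the expected score equals the target."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if expected_score(mid, difficulties) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

diffs = [-1.0, 0.0, 1.0]        # hypothetical component difficulties
theta = locate(1.5, diffs)      # location where the expected score is 1.5
```

With these symmetric difficulties the located point is 0.0 logits; the same bisection with target 1.0 or 2.0 gives the other candidate definitions.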

 

For these alternative definitions, we need to compute the distance of the chosen location from the standard location and then apply that distance to the item difficulty using "pivot anchoring" implemented in Winsteps with SAFILE=. We can usually discover the distance we want by looking at the GRFILE= output.

 

The procedure is:

(1) Analyze the data without pivot-anchoring

(2) Output SFILE=sf.txt which contains the standard Andrich thresholds

(3) Output GRFILE=gr.txt which contains the values connected with all the scores and probabilities on the item

(4) Identify the logit value corresponding to the desired location on the latent variable = M

(5) Subtract M from all the values for the super-item in SFILE=sf.txt

(6) The adjusted SFILE= is now specified as SAFILE=sf.txt, the pivot-anchor file

(7) Analyze the data with pivot-anchoring

(8) The difficulty of the super-item should now have changed by the specified value, M
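
The threshold arithmetic of steps (5)-(6) can be sketched as follows; the SFILE= values for super-item 4 and the shift M are hypothetical values for illustration.

```python
# Hypothetical SFILE= rows for super-item 4: (item, category) -> Andrich threshold
sfile = {(4, 0): 0.00, (4, 1): -1.20, (4, 2): 0.10, (4, 3): 1.10}

M = 0.75  # desired shift, identified from the GRFILE= output (assumed value)

# Step (5): subtract M from every threshold of the super-item
safile = {key: val - M for key, val in sfile.items()}

# Step (6): write the adjusted values in SAFILE= layout
for (item, cat), val in sorted(safile.items()):
    print(f"{item} {cat} {val:6.2f}")
```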

 


 

Pivots are the locations in the dichotomy, rating scale, or partial-credit scale at which the categories would be dichotomized, i.e., the point that marks the transition from "bad" to "good", or from "unhealthy" to "healthy". Ordinarily the pivot is placed at the point where the highest and lowest categories of the response structure are equally probable. Pivot anchoring redefines the item measures. Its effect is to move the reported difficulty of an item relative to its rating scale structure. It makes no change to the fit of the data to the model or to the expected observation corresponding to each actual observation.

 

Dr. Rita Bode's procedure works well. The idea is to align the item difficulties so that the cut-point for each item (equivalent to the dichotomous item difficulty) is located at the reported item difficulty on the latent variable. So we do an arithmetic sleight of hand. In the original analysis, we look at a Table such as Table 2.2 and see where, along the line (row) for each item, the substantive cut-point (pass-fail point, benchmark, etc.) is located. We note down its measure value on the latent variable (x-axis of Table 2.2).

 

Then, for each item, we compare this measure value (the provisional new item difficulty) with its reported item difficulty. The difference is the amount by which we need to shift the Andrich thresholds for that item. Here is the computation:

New average thresholds (excluding bottom "0") = Old item difficulty + Old average thresholds (excluding bottom "0") - Provisional New item difficulty

 

If the analysis is unanchored, Winsteps will maintain the average difficulty of the items:

New item difficulty = Provisional New item difficulty - Average(Provisional New item difficulty) + Average(Old item difficulty)
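
A minimal numeric sketch of these two formulas; the item difficulties, average thresholds, and provisional locations are all hypothetical values.

```python
# Hypothetical values for a 3-item instrument
old_difficulty = [0.5, -0.2, 1.1]   # reported item difficulties
old_avg_thresh = [0.0, 0.0, 0.0]    # average Andrich thresholds (excluding bottom "0")
provisional    = [0.8, -0.2, 0.7]   # provisional new difficulties (the cut-points)

# First formula: new average thresholds for each item
new_avg_thresh = [d + t - p
                  for d, t, p in zip(old_difficulty, old_avg_thresh, provisional)]

def mean(xs):
    return sum(xs) / len(xs)

# Second formula: in an unanchored analysis, Winsteps recenters so that
# the mean item difficulty is preserved
new_difficulty = [p - mean(provisional) + mean(old_difficulty) for p in provisional]
```

Note that the mean of new_difficulty equals the mean of old_difficulty, which is exactly the recentering that the second formula expresses.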

 

Person measures with complete response strings will usually have very small or no changes.

 

Since this can become confusing, it is usually easiest to:

1) output the SFILE= from the original analysis to Excel.

2) add (item difficulty - cut-point) to the threshold values for each item. Example: we want to subtract 1 logit from the item difficulty to move the item difficulty to the cut-point on the latent variable. We add 1 logit to all the thresholds for an item, then Winsteps will subtract 1 logit from the item's difficulty.

3) Copy-and-paste the Excel SFILE= values into the Winsteps control file between SAFILE=* and *

4) Since each item now has different threshold values: ISGROUPS=0

5) This procedure should make no change to the person measures.
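
The spreadsheet arithmetic of steps 1)-2) can equally be sketched in Python; the SFILE= thresholds, item difficulties, and cut-points below are hypothetical values.

```python
# Hypothetical SFILE= rows: (item, category, Andrich threshold)
sfile = [(1, 1, -0.85), (1, 2, 0.85),
         (2, 1, -0.60), (2, 2, 0.60)]

difficulty = {1: 1.3, 2: 0.4}   # reported item difficulties (hypothetical)
cut_point  = {1: 0.3, 2: 0.4}   # desired cut-points; item 2 needs no change

# Add (item difficulty - cut-point) to each threshold; Winsteps will then
# subtract that amount from the item's difficulty, moving it to the cut-point.
safile = [(i, c, round(t + difficulty[i] - cut_point[i], 2))
          for i, c, t in sfile]
```

Item 1's thresholds shift by +1 logit (to 0.15 and 1.85), so its reported difficulty moves down by 1 logit to the cut-point; item 2 is unchanged.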

 

In Rita Bode's approach, we have a target item difficulty ordering. Usually most items are already in that order, but a few items are out of order. These out-of-order items need to be pivot-anchored to place them correctly in the item hierarchy.

i)  Output Table 13, the original ordering of the items.

ii) Move the item rows up and down to give the desired ordering.

iii) For items that are already in order (usually more than half the items), copy their SFILE= values unchanged into the SAFILE=.

iv) For the other items, change the SFILE= values enough to locate those items in the correct position. Example: to increase an item's difficulty by one logit, decrease its thresholds from the SFILE= by one logit before entering them in the SAFILE=.

 

See also SAFILE=. PIVOT= was an earlier, unsuccessful attempt to automate this procedure.

 

For polytomies:

 

1) from your original analysis, with your GROUPS= (if any) and no SAFILE=, output an SFILE=

 

2) build an SAFILE=

 

a) for each item, use the SFILE= value for its group

 

b) add the pivot anchor value to the SFILE= value

 

c) include the new set of values for the item in the SAFILE=. There must be entries in SAFILE= for every item. Use the SFILE= values directly if there is no change

 

Example 1: SFILE= for the group with item 1:

1 0     .00

1 1    -.85

1 2     .85

 

We want to add one logit for item 1, two logits for item 2, no change for item 3

 

SAFILE=*

1 0     1.00 ; this is a placeholder, but is convenient to remind us of the pivot value

1 1     0.15

1 2     1.85

2 0     2.00

2 1     1.15

2 2     2.85

3 0     0.00

3 1     -.85

3 2     0.85

*
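
This arithmetic can be checked with a few lines of Python, building the SAFILE= rows from the group's SFILE= values and the pivot values:

```python
# SFILE= values for the rating-scale group containing item 1
sfile = {0: 0.00, 1: -0.85, 2: 0.85}

# Pivot values: +1 logit for item 1, +2 for item 2, no change for item 3
pivot = {1: 1.0, 2: 2.0, 3: 0.0}

# Build the SAFILE= rows by adding each item's pivot value to every threshold
for item in sorted(pivot):
    for cat in sorted(sfile):
        print(f"{item} {cat} {sfile[cat] + pivot[item]:5.2f}")
```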

 

3) do the analysis again with ISGROUPS=0 and SAFILE=* ... *

 

4) The person measures will shift by the average of the pivot values.

 

Example 2: We can plot the IRF/ICC ogives for every PCM item on one graph. This gives a complete picture of the expected responses on all the items for each person location on the x-axis, but it does not give us a useful depiction of the latent variable. When explaining the latent variable in terms of an item hierarchy, we need to take a horizontal slice across all the PCM ICCs/IRFs somewhere around the middle of the rating scale. Pivot anchoring lets us do this.

 

For instance, the rating scale may be a Likert scale from 1 to 5 with 4=Agree. We may decide that the "pivot" point on the IRF of each item is where the expected score = 4 (which is also the point where category 4=Agree has the highest probability of being observed). So we need to redefine the item difficulty of each PCM item away from its default value (the location where the highest and lowest categories are equally probable) to the location where the expected score on the item = 4. We do this using pivot-anchoring. Here is the procedure:

 

1. Do a standard PCM analysis (ISGROUPS=0). For each item, Table 3.2, etc., reports the item difficulty + Andrich thresholds: Di + {Fij}

2. Output IFILE= the item difficulty for each item = Di. Example: Di = 2.5 logits

3. Output ISFILE= the location on the latent variable where the expected score on the item is 4 = Ei. Example: Ei = 3.0 logits

4. Compute the desired shift of location for each item = Ei - Di  = 3.0-2.5 = 0.5 logits

5. Output SFILE= the Andrich thresholds (relative to the item difficulties) for each item = {Fij}. Example: -2, -1, 0, 3

6. Subtract the shift from the Andrich thresholds: {Sij} = {Fij - (Ei - Di)}. Example: {Sij} = -2.5, -1.5, -0.5, 2.5

7. Anchor (fix) the Andrich thresholds SAFILE= for each item at their {Sij} values.

8. Rerun the analysis. Item difficulties, D'i, are now located at the points where the expected scores on the items are 4.

Example: Di + Fij = D'i + Sij for each threshold of each item. In our example:

D'i = Di + Fij - Sij = 2.5 + {-2, -1, 0, 3} - {-2.5, -1.5, -0.5, 2.5} = 2.5 + {0.5, 0.5, 0.5, 0.5} = 3.0, the desired location.
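
The arithmetic of steps 4-8 can be verified with the example values above (Di = 2.5, Ei = 3.0, {Fij} = -2, -1, 0, 3):

```python
Di = 2.5                      # item difficulty, from IFILE=
Ei = 3.0                      # location where the expected score = 4, from ISFILE=
F  = [-2.0, -1.0, 0.0, 3.0]   # Andrich thresholds, from SFILE=

shift = Ei - Di               # step 4: 0.5 logits
S = [f - shift for f in F]    # step 6: pivot-anchored thresholds {Sij}

# Step 8 check: Di + Fij = D'i + Sij, so D'i = Di + Fij - Sij for every threshold
new_D = [Di + f - s for f, s in zip(F, S)]   # each element equals Ei
```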


Help for Winsteps Rasch Measurement and Rasch Analysis Software: www.winsteps.com. Author: John Michael Linacre
