SAFILE= structure-threshold input anchor file

The SFILE= output (not ISFILE=, but see PCM anchoring) of one analysis may be used unedited as the SAFILE= of another.

 

The rating-scale structure parameter values (taus, Rasch-Andrich thresholds, step calibrations) can be anchored (fixed) using SAFILE=. Anchoring facilitates test-form equating. The structures in the rating (or partial credit) scales of two test forms, or of an item bank and the current form, can be anchored at the values from the other form or from the bank. The common rating (or partial credit) scale calibrations are then maintained, and the other measures are estimated in the frame of reference defined by the anchor values. Use both IAFILE= and SAFILE= if a polytomous item in one analysis must be identical in thresholds and overall difficulty to the same item in another analysis. Use only SAFILE= if the item must be identical in thresholds, but the overall item difficulties may differ.

 

SAFILE= file name  ; file containing details

SAFILE=*           ; in-line list

SAFILE=?           ; opens a Browser window to find the file

No ISGROUPS=, or all items in one group:

(bottom category) 0
 example: 0 0  ; place holder for the bottom category of the rating scale, in case it is not observed in the data

(category number) (anchor value)
 example: 2 1.5  ; the Andrich threshold (step calibration) between categories 1 and 2 is anchored at 1.5 for all items (unless overridden)

ISGROUPS= specifies more than one group of items, or PCM:

(item number) (bottom category) 0
 example: 34 0 0  ; for item 34 and all items in the same ISGROUPS= group, place holder for the bottom category in case it is not observed in the data

(item number) (category number) (anchor value)
 example: 34 2 1.5  ; for item 34 and all items in the same ISGROUPS= group, the Andrich threshold (step calibration) between categories 1 and 2 is anchored at 1.5

(item number-item number) (category number) (anchor value)
 example: 34-39 2 1.5  ; for items 34 to 39 and all items in the same ISGROUPS= group(s), the Andrich threshold (step calibration) between categories 1 and 2 is anchored at 1.5

(1-NI=) (category number) (anchor value)
 example: 1-47 1 2.0  ; default value for a category threshold for all items (and so for all ISGROUPS= groups)

*
 end of list

 

In order to anchor category structures, an anchor file must be created of the following form:

1. Use one line per category Rasch-Andrich threshold to be anchored.

2. If all items use the same rating scale (i.e., ISGROUPS=" ", the standard, or you assign all items to the same grouping, e.g., ISGROUPS=222222..), then type the category number, a blank, and the "structure measure" value (in logits or your user-rescaled units) at which to anchor the Rasch-Andrich threshold corresponding to that category (see Table 3.2). Arithmetical expressions are allowed.

3. If you wish to force category 0 to stay in an analysis, anchor its calibration at 0. Specify SAITEM=Yes to use the multiple-ISGROUPS= format.

   or
If items use different rating (or partial credit) scales (i.e., ISGROUPS=0, or items are assigned to different groupings, e.g., ISGROUPS=122113..), then type the sequence number of any item belonging to the grouping, a blank, the category number, a blank, and the "structure measure" value (in logits if USCALE=1, otherwise in your user-rescaled units) at which to anchor the Rasch-Andrich threshold up to that category for that grouping. If you wish to force category 0 to stay in an analysis, anchor its calibration at 0.

 

This information may be entered directly in the control file using SAFILE=*

 

Anything after ";" is treated as a comment.

 

Example 1: Dichotomous: A score of, say, 438 means that you have a 62% chance (not 50%, as is the default in Winsteps/Ministep!) of answering a dichotomous item of difficulty 438 correctly. How can I change this threshold from 50% to 62%?

 

In your control file, include:

UASCALE=1 ; anchoring is in logits

SAFILE=* ; anchors the response structure

0 0 ; place holder for bottom category

1 -0.489548225   ; ln((100%-62%)/62%)

*

When you look at Table 1, you should see that the person abilities are now lower relative to the item difficulties.
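The anchor value in this example is just the log-odds of failure at the desired success level. A minimal sketch (the helper name `dichotomous_anchor` is hypothetical, not a Winsteps term):

```python
import math

def dichotomous_anchor(p_success):
    """Andrich-threshold anchor value that places the item difficulty
    at probability p_success of a correct response. The Winsteps
    default corresponds to p_success = 0.50, giving an anchor of 0."""
    return math.log((1 - p_success) / p_success)

print(round(dichotomous_anchor(0.62), 9))  # -0.489548225, as above
```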

 

Example 2: Polytomous: A score of, say, 438 means that you have 62% expectation of answering correctly to a polytomous item (0-1-2-3) of difficulty 438. How can I set the thresholds to 62%?

 

The default item difficulty for a polytomy is the point where the lowest and highest categories are equally probable. We need to make a logit adjustment to all the category thresholds equivalent to a change of difficulty corresponding to a rating of .62*3 = 1.86.
This is intricate:
1. We need the current set of Rasch-Andrich thresholds (step calibrations) = F1, F2, F3.
2. We need to compute the measure (M) corresponding to a score of 1.86 on the rating scale
3. Then we need to anchor the rating scale at:
SAFILE=*
0 0 ; place holder for bottom category
1 F1 - M
2 F2 - M
3 F3 - M
*
 
An easy way to obtain M is to produce the GRFILE= from the Winsteps "Output Files" menu, and then look up the Measure for the Score you want.
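The computation of M and of the shifted anchors can also be sketched numerically. The thresholds F1, F2, F3 below are hypothetical; a real analysis would take them from the SFILE= of step 1:

```python
import math

def expected_score(theta, thresholds):
    """Expected rating (categories 0..m) under the rating-scale model,
    given Rasch-Andrich thresholds F1..Fm relative to the item."""
    psi, cum = [0.0], 0.0
    for f in thresholds:
        cum += theta - f
        psi.append(cum)
    probs = [math.exp(p) for p in psi]
    return sum(k * p for k, p in enumerate(probs)) / sum(probs)

def measure_for_score(target, thresholds, lo=-10.0, hi=10.0):
    """Find the measure M with expected score = target, by bisection
    (the expected-score curve is monotonically increasing)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if expected_score(mid, thresholds) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

F = [-1.0, 0.0, 1.0]               # hypothetical F1, F2, F3 for a 0-3 item
M = measure_for_score(0.62 * 3, F)  # measure for a score of 1.86
anchors = [f - M for f in F]        # the SAFILE= values: Fk - M
```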

 

Example 3: A rating scale, common to all items, of three categories numbered 2, 4, and 6, is to be anchored at pre-set calibrations. The calibration of the Rasch-Andrich threshold from category 2 to category 4 is -1.5, and of the Rasch-Andrich threshold to category 6 is +1.5.

1. Create a file named, say, "STANC.FIL"

2. Enter the lines

 2 0  ; place holder for bottom category of this rating scale

 4 -1.5  ; Rasch-Andrich threshold from category 2 to category 4, anchor at -1.5 logits

 6 1.5  ; Rasch-Andrich threshold from category 4 to category 6, anchor at +1.5 logits

 

Note: categories are calibrated pair-wise, so the Rasch-Andrich threshold values do not have to advance.

 

3. Specify, in the control file,

 ISGROUPS=" "   (the standard)

 SAFILE=STANC.FIL  structure anchor file

 

 or, enter directly in the control file,

 SAFILE=*

 4 -1.5

 6 1.5

 *

If you wish to use the multiple-grouping format, specify an example item, e.g., item 13:

 SAITEM=YES

 SAFILE=*

 13 4 -1.5

 13 6 1.5

 *

 

To check this, look for "A" after the Andrich threshold measure:

 

+------------------------------------------------------------------

|CATEGORY   OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|

|LABEL SCORE COUNT %|AVRGE EXPECT|  MNSQ  MNSQ||THRESHOLD| MEASURE|

|-------------------+------------+------------++---------+--------+

|  4   4     620  34|   .14   .36|   .87   .72||   -1.50A|    .00 |

 

Example 4: A partial credit analysis (ISGROUPS=0) has a different rating scale for each item. Item 15 has four categories, 0,1,2,3 and this particular response structure is to be anchored at pre-set calibrations.

1. Create a file named, say, "PC.15"

2. Enter the lines

   15 0 0  ; bottom categories are always at logit 0

   15 1 -2.0  item 15, Rasch-Andrich threshold to category 1, anchor at -2 logits

   15 2 0.5

   15 3 1.5

3. Specify, in the control file,

   ISGROUPS=0

   SAFILE=PC.15

   IAFILE= file of item calibrations

 

Example 5: A grouped rating scale analysis (ISGROUPS=21134..) has a different rating scale for each grouping of items. Item 26 belongs to grouping 5 for which the response structure is three categories, 1,2,3 and this structure is to be anchored, but the difficulties of the individual items are to be re-estimated.

1. Create a file named, say, "GROUPING.ANC"

2. Enter the lines

 26 2 -3.3  ; for item 26, representing grouping 5, Rasch-Andrich threshold to category 2, anchored at -3.3

 26 3 3.3

3. Specify, in the control file,

 ISGROUPS=21134..

 SAFILE=GROUPING.ANC

 ; there is no IAFILE= because we want to re-estimate the item difficulties

 

Example 6: A partial-credit scale has an unobserved category last time, but we want to use those anchor values where possible.

We have two choices.

 

a) Treat the unobserved category as a structural zero, i.e., unobservable. If so...

Rescore the item using IVALUE=, removing the unobserved category from the category hierarchy, and use a matching SAFILE=.

 

In the run generating the anchor values, which had STKEEP=NO,

+------------------------------------------------------------------

|CATEGORY   OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|

|LABEL SCORE COUNT %|AVRGE EXPECT|  MNSQ  MNSQ||THRESHOLD| MEASURE|

|-------------------+------------+------------++---------+--------+

|  1   1      33   0|  -.23  -.15|   .91   .93||  NONE   |(  -.85)| 1

|  2   2      23   0|   .15   .05|   .88   .78||   -1.12 |   1.44 | 2

|  4   3       2   0|   .29   .17|   .95   .89||    1.12 |(  3.73)| 4

|-------------------+------------+------------++---------+--------+

 

In the anchored run:

IREFER=A...... ; item 1 is an "A" type item

CODES=1234  ; valid categories

IVALUEA=12*3  ; rescore "A" items from 1,2,4 to 1,2,3

SAFILE=*

1  1     .00  ; place holder for bottom category

1  2   -1.12

1  3    1.12

*

 

If the structural zeros in the original and anchored runs are the same, then the same measures would result from:

STKEEP=NO

SAFILE=*

1  1     .00  ; place holder for bottom category

1  2   -1.12

1  4    1.12

*

 

b) Treat the unobserved category as an incidental zero, i.e., very unlikely to be observed.

Here is Table 3.2 from the original run which produced the anchor values. The NULL indicates an incidental or sampling zero.

 

+------------------------------------------------------------------

|CATEGORY   OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|

|LABEL SCORE COUNT %|AVRGE EXPECT|  MNSQ  MNSQ||THRESHOLD| MEASURE|

|-------------------+------------+------------++---------+--------+

|  1   1      33   0|  -.27  -.20|   .91   .95||  NONE   |(  -.88)| 1

|  2   2      23   0|   .08  -.02|   .84   .68||    -.69 |    .72 | 2

|  3   3       0   0|            |   .00   .00||  NULL   |   1.52 | 3

|  4   4       2   0|   .22   .16|   .98   .87||     .69 |(  2.36)| 4

|-------------------+------------+------------++---------+--------+

 

Here is the matching SAFILE=

 

SAFILE=*

1  1     .00  ; place holder for bottom category

1  2    -.69

1  3   46.71 ; flag category 3 with a large positive value, i.e., unlikely to be observed.

1  4  -46.02 ; maintain sum of Andrich thresholds (step calibrations) at zero.

*
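The arithmetic behind this flag can be checked with a small sketch: the large positive anchor makes the NULL category's probability negligible, while the thresholds still sum to (approximately) zero:

```python
import math

# Anchor values from the SAFILE= above: the NULL category 3 is flagged with
# a large positive threshold, and the next threshold absorbs it so that the
# thresholds still sum to (approximately) zero.
anchors = {2: -0.69, 3: 46.71, 4: -46.02}
assert abs(sum(anchors.values())) < 0.005

# The probability of the flagged category, here at theta = 0, is vanishing:
psi, cum = [0.0], 0.0
for k in (2, 3, 4):
    cum += 0.0 - anchors[k]
    psi.append(cum)
probs = [math.exp(p) for p in psi]
p3 = probs[2] / sum(probs)   # psi[2] corresponds to the flagged category 3
assert p3 < 1e-12
```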

 

Example 7: Partial-credit item difficulties are to be set at an expected score of 1.3333 for items scored 0,1,2:

1. Do the standard analysis with UPMEAN=0. Center the person abilities, so we can see the change in item difficulties later.

2. Output the SFILE= to Excel of Rasch-Andrich thresholds

3. Output the GRFILE= to Excel of the ICCs, item characteristic curves

4. From the GRFILE=, discover the measure for each item corresponding to 1.3333

5. Output the IFILE= to Excel showing the item difficulties

6. Subtract the item difficulty from the 1.3333 measure. This is the necessary shift in the item difficulty

7. Subtract this shift from every threshold for the item in the SFILE=

8. Copy-and-paste-text the shifted thresholds into a text-file SAFILE=

9. Reanalyze the data with the SAFILE= and UPMEAN=0. All the item difficulties should now be at their RP67 values, relative to the mean of the person abilities.
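Steps 6-7 are simple arithmetic. In this sketch every number is made up for illustration:

```python
# Step 6: shift = (measure at expected score 1.3333) - (item difficulty).
item_difficulty = 0.40        # hypothetical value from the IFILE=
measure_at_1333 = 0.95        # hypothetical value from the GRFILE=
shift = measure_at_1333 - item_difficulty

# Step 7: subtract the shift from every threshold for the item in the SFILE=.
sfile_thresholds = {1: -0.80, 2: 0.80}   # hypothetical SFILE= values
safile_anchors = {cat: f - shift for cat, f in sfile_thresholds.items()}
```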

 

Example 8: Score-to-measure Table 20 is to be produced from known item and rating scale structure difficulties.

Specify:

IAFILE=  ; the item anchor file

SAFILE= ; the structure/step anchor file (if not dichotomies)

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

STBIAS=NO ; anchor values do not need estimation bias correction.

The data file comprises two dummy data records, so that every item has a non-extreme score, e.g.,

For dichotomies:

 Record 1: 10101010101

 Record 2: 01010101010

 

 For a rating scale from 1 to 5:

 Record 1: 15151515151

 Record 2: 51515151515
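A hypothetical helper for generating such dummy records, for any number of items and any category range:

```python
def dummy_records(n_items, lowest, highest):
    """Two dummy data records guaranteeing every item a non-extreme
    score: alternate the extreme categories in opposite phase."""
    rec1 = "".join(str(highest if i % 2 == 0 else lowest) for i in range(n_items))
    rec2 = "".join(str(lowest if i % 2 == 0 else highest) for i in range(n_items))
    return rec1, rec2

print(dummy_records(11, 0, 1))  # the dichotomous pair shown above
print(dummy_records(11, 1, 5))  # the 1-5 rating-scale pair shown above
```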

 


 

Redefining the Item Difficulty of Rating Scale items:

 

We want to define the difficulty of an item as 65% success on the item, instead of the usual approximately 50% success.

 

1. Suppose we have these Rasch-Andrich thresholds (step calibrations) from a standard rating-scale analysis:

 

 Category    Rasch-Andrich Threshold
    1            (0.00)
    2             -.98
    3             -.25
    4             1.22

 

2. The item score range is 1-4, so

a) we need the relative measure corresponding to an expected score of 65% on the item = 1+ (4-1)*0.65 = 2.95

 

3. We look at the GRFILE= and see that the measure corresponding to an expected score of 2.95 is about 0.58 (we can verify this by looking at the Graphs window, Expected score ICC)

 

 ITEM   MEAS   SCOR   INFO     0     1     2     3
   1     .48   2.89    .67   .05   .23   .48   .23
   1     .56   2.94    .65   .05   .22   .49   .25
   1     .64   2.99    .63   .04   .20   .49   .27

 

4. We want the item difficulty to correspond to 65% success instead of its current approximately 50% correct. So we have raised the bar for the item. The item is to be reported as about 0.57 logits more difficult.

 

5. To force the item to be reported as 0.57 logits more difficult, we need the Andrich thresholds (step calibrations) to be 0.57 logits easier = -0.57 logits.

 

 Category    Rasch-Andrich Threshold
    1            (0.00)
    2             -.98 + -.57 = -1.55
    3             -.25 + -.57 = -.82
    4             1.22 + -.57 = .64

 

6. Now, since the item mean remains 0, all the person measures will be reduced by 0.57 logits relative to their original values.
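Steps 2-5 can be verified numerically. This minimal sketch applies the rating-scale model formula directly to the thresholds of step 1; a real analysis would read M from the GRFILE= as described:

```python
import math

def expected_score(theta, thresholds, bottom=1):
    """Expected rating under the rating-scale model, for categories
    bottom .. bottom+len(thresholds), relative to the item difficulty."""
    psi, cum = [0.0], 0.0
    for f in thresholds:
        cum += theta - f
        psi.append(cum)
    probs = [math.exp(p) for p in psi]
    return sum((bottom + k) * p for k, p in enumerate(probs)) / sum(probs)

F = [-0.98, -0.25, 1.22]     # the thresholds from step 1
lo, hi = -10.0, 10.0
for _ in range(100):         # bisect for the measure with expected score 2.95
    mid = (lo + hi) / 2
    if expected_score(mid, F) < 2.95:
        lo = mid
    else:
        hi = mid
M = (lo + hi) / 2            # about 0.57, agreeing with the GRFILE= lookup
anchors = [f - M for f in F] # the shifted thresholds of step 5
```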

 


 

Dichotomies (MCQ, etc.) Mastery Levels:

Example 9: To set mastery levels at 75% on dichotomous items (so that maps line up at 75%, rather than 50%), we need to adjust the item difficulties by ln(75/(100-75)) = 1.1 logits.

SAFILE=*

0 0  ; place holder for bottom category

1 -1.1 ; set the Rasch-Andrich threshold point 1.1 logits down, so that the person ability matches item difficulty at 75% success.

  ;  If you are using USCALE=, then the value is -1.1 * USCALE=

*

 

Similarly for 66.67% success or a 66.67% mastery level: ln(66.67/(100-66.67)) = 0.693 logits.

SAFILE=*

0 0

1 -0.6931 ; notice that this is negative

*

 

Similarly for 65% success or a 65% mastery level: ln(65/(100-65)) = 0.619 logits.

SAFILE=*

0 0  ; place holder for bottom category

1 -0.619 ; notice that this is negative

*
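The three offsets above all come from the same log-odds formula, sketched here (the helper name is hypothetical):

```python
import math

def mastery_offset(p):
    """Logit distance ln(p/(1-p)) between the 50% point and the
    p-probability point on a dichotomous item."""
    return math.log(p / (1 - p))

# The SAFILE= anchor value is the negative of this offset:
for p in (0.75, 0.6667, 0.65):
    print(f"{p:.0%} mastery: anchor the threshold at {-mastery_offset(p):.4f} logits")
```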

 

Polytomies (rating scales, partial credit, etc.):

When a variety of rating (or partial credit) scales are used in an instrument, their different formats perturb the item hierarchy. This can be remedied by choosing a point along each rating (or partial credit) scale that dichotomizes its meaning (not its scoring) in an equivalent manner. This is the pivot point. The effect of pivoting is to move the structure calibrations such that the item measure is defined at the pivot point on the rating (or partial credit) scale, rather than the standard point (at which the highest and lowest categories are equally probable).

 

Example 1. Anchoring polytomous items for the Rating Scale Model

 

CODES = 012 ; 3 category Rating Scale Model

IAFILE=*

1 2.37 ; anchor item 1 at 2.37 logits

2 -1.23 ; anchor item 2 at -1.23 logits

*

 

SAFILE=*

0 0 ; the bottom category is always anchored at 0

1 -2.34 ; Andrich threshold (step difficulty) from category 0 to 1

2 2.34 ; Andrich threshold (step difficulty) from category 1 to 2

*

 

Example 2. Anchoring polytomous items for the Partial Credit and Grouped-Items models

 

CODES = 012 ; 3 category Rating Scale Model

ISGROUPS=0

IAFILE=*

1 2.37 ; anchor item 1 at 2.37 logits

2 -1.23 ; anchor item 2 at -1.23 logits

*

 

SAFILE=*

; for item 1, relative to the difficulty of item 1

1 0 0 ; the bottom category is always anchored at 0

1 1 -2.34 ; Andrich threshold (step difficulty) from category 0 to 1

1 2 2.34 ; Andrich threshold (step difficulty) from category 1 to 2

; for item 2, relative to the difficulty of item 2

2 0 0 ; the bottom category is always anchored at 0

2 1 -1.54 ; Andrich threshold (step difficulty) from category 0 to 1

2 2 1.54 ; Andrich threshold (step difficulty) from category 1 to 2

*

 

Example 3. "For item 47, categories 3 and 5 are the most probable, and I would like the item difficulty to be based on the intersection of these two probability curves. For item 123, categories 8 and 9 are the most probable... how do I specify this with SAFILE=...?"

 

The items must be in different item groups. Simplest is the Partial Credit Model so that each item has its own rating-scale structure:

ISGROUPS=0

 

SAFILE=*

; the intersection of categories 3 and 5 is not a Rasch parameter, so you need to:

; 1. look at Winsteps GRFILE=

; 2. find the measure for item 47 where categories 3 and 5 are equally probable = M35

; 3. find the measure for item 47 where categories 4 and 5 are equally probable = M45

; 4. compute xxx.xx = M45-M35

47 5 xxx.xx ; distance of reference point 3-5 from 4-5 threshold

 

123 9 0 ; threshold between categories 8 and 9 is set at 0

*
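As the comments say, the 3-5 intersection can be read from the GRFILE=; under the partial-credit model it also has a simple closed form (the equal-probability point of categories a and b is the mean of the intervening Andrich thresholds), sketched here with hypothetical thresholds:

```python
import math

# Hypothetical Andrich thresholds F1..F5 for a 6-category (0-5) item like 47:
F = {1: -2.0, 2: -1.0, 3: 0.5, 4: 1.5, 5: 2.5}

M35 = (F[4] + F[5]) / 2      # measure where categories 3 and 5 are equally probable
M45 = F[5]                   # measure where categories 4 and 5 are equally probable
anchor_4_5 = M45 - M35       # the xxx.xx value for "47 5 xxx.xx"

# Numeric check: at theta = M35, P(category 3) equals P(category 5)
psi, cum = [0.0], 0.0
for k in range(1, 6):
    cum += M35 - F[k]
    psi.append(cum)
probs = [math.exp(p) for p in psi]
assert abs(probs[3] - probs[5]) < 1e-12
```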

 

Here is a general procedure.

Use ISGROUPS= to assign items to their rating-scale groupings.

Do an unanchored run, make sure it all makes sense.

Write out an SFILE=structure.txt  of the rating scale (partial credit) structures.

 

Calculate, for each item, the amount that you want the item difficulty to move. Looking at the Graphs menu or Table 2 may help you decide.

 

Make this amount of adjustment to every value for the item in the SFILE=.

So, suppose you want item 3 to be shown as 1 logit more difficult on the item reports.

The SFILE=structure.txt is

3 0 0.0  ; place holder for bottom category

3 1 -2.5

3 2 -1.0

...

*

Change this to (subtract 1 from each value, to make the item 1 logit more difficult)

3 0 0  ; place holder for bottom category

3 1 -3.5

3 2 -2.0

...

*

This becomes the SAFILE=structure.txt of the pivoted analysis.
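The pivoting adjustment can be sketched with the SFILE= rows for item 3 shown above. Following the sign convention of Example 7 (subtract the intended difficulty shift from every threshold; the bottom-category placeholder stays at 0):

```python
# SFILE= rows for item 3: (item, category, threshold), values as above.
sfile_rows = [(3, 0, 0.0), (3, 1, -2.5), (3, 2, -1.0)]
shift = 1.0  # desired increase in reported item difficulty, in logits

# Subtract the shift from each threshold; keep the bottom placeholder at 0.
safile_rows = [(item, cat, 0.0 if cat == 0 else round(value - shift, 2))
               for item, cat, value in sfile_rows]
```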

 

Example 10: Pivoting with ISGROUPS=. Positive (P) items pivot at an expected score of 2.5. Negative (N) items at an expected score of 2.0

ISGROUPS=PPPPPNNNNN

SAFILE=*

1 2 0.7 ; put in the values necessary to move the center to the desired spot

5 2 0.5 ; e.g., the "structure calibration" - "score-to-measure of pivot point"

*

 

Example 11:  To set a rating (or partial credit) scale turning point: In the Liking for Science, with 0=Dislike, 1=Neutral, 2=Like, anything less than an expected score of 1.5 indicates some degree of lack of liking:

SAFILE=*

1 -2.22 ; put in the Andrich threshold (step calibration) necessary to move expected rating of 1.5 to the desired spot

*

 

RATING SCALE PIVOTED AT 1.50

+------------------------------------------------------------------

|CATEGORY   OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|

|LABEL SCORE COUNT %|AVRGE EXPECT|  MNSQ  MNSQ||THRESHOLD| MEASURE|

|-------------------+------------+------------++---------+--------+

|  0   0     197  22| -2.29 -2.42|  1.05   .99||  NONE   |( -3.42)| dislike

|  1   1     322  36| -1.17  -.99|   .90   .79||   -2.22 |  -1.25 | neutral  

|  2   2     368  41|   .89   .80|   .98  1.29||    -.28 |(   .92)| like

|-------------------+------------+------------++---------+--------+

|MISSING       1   0|   .04      |            ||         |        |

+------------------------------------------------------------------

AVERAGE MEASURE is mean of measures in category.

 

+-------------------------------------------------------------------+

|CATEGORY   STRUCTURE    |  SCORE-TO-MEASURE   |CUMULATIV| COHERENCE|

| LABEL   MEASURE   S.E. | AT CAT. ----ZONE----|PROBABLTY| M->C C->M|

|------------------------+---------------------+---------+----------|

|   0      NONE          |( -3.42) -INF   -2.50|         |  63%  44%| dislike

|   1       -2.22    .10 |  -1.25  -2.50    .00|   -2.34 |  55%  72%| neutral

|   2        -.28    .09 |(   .92)   .00  +INF |    -.16 |  84%  76%| like

+-------------------------------------------------------------------+

 

Values of .00 for scores of 1.5 show the effect of pivot anchoring on the rating (or partial credit) scale: the structure calibrations are offset.

 

TABLE 21.2 LIKING FOR SCIENCE (Wright & Masters p.18)  sf.out Aug  1 21:31 2000

        EXPECTED SCORE OGIVE: MEANS

       ++------+------+------+------+------+------+------+------++

     2 +                                               2222222222+

       |                                       22222222          |

       |                                   2222                  |

       |                                222                      |

E      |                              22                         |

X  1.5 +                            12                           +

P      |                          11|                            |

E      |                        11  |                            |

C      |                      11    |                            |

T      |                     1      |                            |

E    1 +                   11       |                            +

D      |                 11*        |                            |

       |                1  *        |                            |

S      |              11   *        |                            |

C      |            11     *        |                            |

O   .5 +          01       *        |                            +

R      |        00|        *        |                            |

E      |     000  |        *        |                            |

       |00000     |        *        |                            |

       |          |        *        |                            |

     0 +          |        *        |                            +

       ++------+------+------+------+------+------+------+------++

       -4     -3     -2     -1      0      1      2      3      4

                        PUPIL [MINUS] ACT MEASURE

 

Example 12: A questionnaire includes several rating (or partial credit) scales, each with a pivotal transition-structure between two categories. The item measures are to be centered on those pivots.

1. Use ISGROUPS= to identify the item response-structure groupings.

2. Look at the response structures and identify the pivot point:

e.g., here are categories for "grouping A" items, after rescoring, etc.

Strongly Disagree 1

Disagree  2

Neutral  3

Agree   4

Strongly Agree 5

If agreement is wanted, pivot between 3 and 4, identified as transition 4.

If no disagreement is wanted, pivot between 2 and 3, identified as transition 3.

 

3. Anchor the transition corresponding to the pivot point at 0, e.g., for agreement:

e.g., for

ISGROUPS=AAAAAAABBBBAACCC

SAFILE=*

6 4 0  ; 6 is an item in grouping A, pivoted at agreement (Rasch-Andrich threshold from category 3 into category 4)

8 2 0  ; 8 is an item in grouping B, pivoted at the Rasch-Andrich threshold from category 2 into category 3

; no pivoting for grouping C, as these are dichotomous items

*

 

Example 13: Anchor files for dichotomous and partial credit items. Use the IAFILE= for anchoring the item difficulties, and SAFILE= to anchor partial credit structures. Winsteps decomposes the (delta) Dij of partial credit items into Di + Fij.

The Di for the partial credit and dichotomous items are in the IAFILE=

The Fij for the partial credit files are in the SAFILE=
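This decomposition can be sketched numerically; the Di and Fij values below are hypothetical:

```python
# Reconstruct the partial-credit Dij from the IAFILE= item difficulties
# (Di) and the SAFILE= structure values (Fij): Dij = Di + Fij.
iafile = {1: 0.50, 2: -0.25}                 # hypothetical Di by item
safile = {(1, 1): -1.2, (1, 2): 1.2,         # hypothetical Fij by (item, cat)
          (2, 1): -0.8, (2, 2): 0.8}

dij = {(i, j): iafile[i] + f for (i, j), f in safile.items()}
```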

 

Suppose the data codes are A,B,C,D, there are two partial credit items scored 0,1,2, and two right-wrong items scored 0,1. Then:

CODES=ABCD

KEY1=BCBC      ; SCORE OF 1 ON THE 4 ITEMS

KEY2=DA**        ; SCORE OF 2 ON THE PARTIAL CREDIT ITEMS

ISGROUPS=0

 

If the right-wrong MCQ items are to be scored 0,2, then

CODES=ABCD

KEY1=BC**      ; SCORE OF 1 ON THE PARTIAL CREDIT ITEMS

KEY2=DABC        ; SCORE OF 2 ON THE PARTIAL CREDIT ITEMS

ISGROUPS=0

 

but better psychometrically is:

CODES=ABCD

KEY1=BCBC      ; SCORE OF 1 ON THE 4 ITEMS

KEY2=DA**        ; SCORE OF 2 ON THE PARTIAL CREDIT ITEMS

IWEIGHT=*

3-4  2          ; items 3 and 4 have a weight of 2.

*

ISGROUPS=0

 

Then write out the item and partial credit structures

IFILE= items.txt

SFILE=pc.txt

 

In the anchored run:

CODES= ... etc.

IAFILE=items.txt

SAFILE=pc.txt

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

 

Anchored values are marked by "A" in the Item Tables, and also Table 3.2

 


 

Anchoring with Partial-Credit Delta δij (Dij) values

 

Example:

 

Title = "Partial credit with anchored Dij structures"

;---------------------------

;        STRUCTURE MEASURE (Andrich threshold)  

;       --------------------

;Item i delta_i1    delta_i2

;---------------------------

;Item 1   -3.0      -2.0

;Item 2   -2.0       1.0

;Item 3    0.0       2.0

;Item 4    1.0       3.0

;Item 5    2.0       3.0

;---------------------------

 

Item1 = 11 ; observations start in column 11

NI=5   ; 5 items

Name1 = 1 ; person label in column 1

CODES = 012  ; valid data values

ISGROUPS = 0  ; partial-credit model

 

IAFILE=*

1-5 0     ; item difficulties for all items set at 0

*

 

SAFILE=*

1 0 0       ; this is a placeholder for data code 0 for item 1

1 1 -3.0        ; value from ISFILE= I+THRESH column

1 2 -2.0

2 0 0 

2 1 -2.0

2 2 1.0

3 0 0

3 1 0.0 

3 2 2.0

4 0 0

4 1 1.0

4 2 3.0

5 0 0

5 1 2.0

5 2 3.0

*

 

&END

END LABELS

Person 1  22111

Person 2  21010

 


 

Equating with Partial Credit (PCM) items

 

Question: My data has three time-points, and I want to compare the item difficulty hierarchies across time-points, but the PCM thresholds are different at each time-point. What should I do?

 

Answer: Equating with PCM is always problematic. The threshold estimates are highly influenced by idiosyncrasies in the local dataset. Since Rasch findings are usually based on person estimates, and these are based on the item ICCs (expected scores on the items), it makes more sense to compare the ICCs than the thresholds.

 

Ben Wright's recommendation was to analyze all the data together to obtain the best compromise thresholds, and then anchor the thresholds at those values for the analysis of each time-point separately. See www.rasch.org/rmt/rmt101f.htm, stage II. Accordingly, the SFILE=sf.txt from the joint analysis becomes the SAFILE=sf.txt for each separate time-point analysis.

 

Another approach is to treat each PCM item essentially as a dichotomy by choosing one Andrich threshold as the "pivot" threshold for each item, and anchoring that threshold at 0. The item difficulties are then forced to conform with this threshold value at all time-points. Accordingly, the SAFILE= anchors the pivot threshold of every item at 0 (the other thresholds are not anchored) in all the time-point analyses. For example, in all the analyses:

 

ISGROUPS=0 ; Partial Credit Model

SAFILE=*

1 3 0 ; for item 1, anchor the threshold between categories 2 and 3 at 0

2 2 0 ; for item 2, anchor the threshold between categories 1 and 2 at 0

3 2 0 ; for item 3, anchor the threshold between categories 1 and 2 at 0

.....

*


Help for Winsteps Rasch Measurement and Rasch Analysis Software: www.winsteps.com. Author: John Michael Linacre
