IAFILE= item anchor file

The IFILE= from one analysis can be used unedited as the item anchor file, IAFILE=, of another.

 

IAFILE= file name      file containing details
IAFILE= *              in-line list
IAFILE= $S1W1          field in item label
IAFILE= ?              opens a Browser window to find a file containing the details

 

The item parameter values (deltas) can be anchored (fixed) using IAFILE=. Anchoring facilitates equating test forms and building item banks. The items common to two test forms, or in the item bank and also in the current form, can be anchored at their other form or bank calibrations. Then the measures constructed from the current data will be equated to the measures of the other form or bank. Other measures are estimated in the frame of reference defined by the anchor values. The anchored values are imputed (inserted) in place of the estimated-from-the-data values. Mathematically, the anchor values are treated as though they, like the estimated-from-the-data values, are the best available estimates of the true values.

 

Displacements are reported, indicating the differences between the anchored values and the freely estimated values. If these are large, please try changing the setting of ANCESTIM=.

 

For polytomies (rating scales, partial credit), IAFILE= must be accompanied by SAFILE=. Conceptually, the IFILE= and the SFILE= are one file: for dichotomies the SFILE= is uninformative and can be ignored, but for polytomies the IFILE= and the SFILE= form a pair, and so do the IAFILE= and the SAFILE=. Anchoring a polytomous item with IAFILE= but without SAFILE= is usually meaningless, because the item is not completely anchored. Use IAFILE= and SAFILE= together if the polytomous item in one analysis must be identical in thresholds and overall difficulty to the same item in another analysis. Use only SAFILE= if the item must be identical in thresholds, but the overall item difficulties can differ.

 


How anchoring works:

 

The anchored items together with the unanchored items determine the person measures based on the data. The person measures determine the calibrations of the unanchored items and the displacements of the anchored items. The person measures are adjusted so that the mean displacement of the anchored items is zero.
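
One way to picture this adjustment (illustrative values, taking displacement as "freely estimated difficulty minus anchor value"): suppose three anchored items show displacements of +0.30, +0.10 and -0.10 logits, so that

 mean displacement = (0.30 + 0.10 - 0.10) / 3 = +0.10

Shifting every person measure by -0.10 logits shifts each freely estimated item difficulty by -0.10 as well, so the displacements become +0.20, 0.00 and -0.20, and their mean is zero.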

 

Let's imagine some situations with complete data:

1. The data fit the anchored items exactly. There are no displacements, and the unanchored items slot exactly into the hierarchy of the anchored items. Person measures are the same as in an unanchored analysis, apart from an overall shift: the mean ability measure is adjusted so that the anchored item displacements are all zero. Unanchored items with the same p-values as the anchored items have the same calibrations.

 

2. All the anchored items happen to have the same item calibration, but have different p-values in the data. The mean ability measure is adjusted so that the mean anchored item displacement is zero. The ability measures are more central than in an unanchored analysis. The calibrations of the unanchored items are more central than in an unanchored analysis, but not the same as anchored items with the same p-values.

 

3. The anchored items have calibrations that are random with respect to the current data. The mean ability measure is adjusted so that the mean anchored item displacement is zero. The ability measures are more central than in an unanchored analysis. The calibrations of the unanchored items are more central than in an unanchored analysis, but not the same as anchored items with the same p-values.

 

4. The anchored items have calibrations that are correlated with the current data, but more extreme than their values in an unanchored analysis.  The mean ability measure is adjusted so that the mean anchored item displacement is zero. The ability measures are more diverse than in an unanchored analysis. The calibrations of the unanchored items are more diverse than in an unanchored analysis, but not the same as anchored items with the same p-values.

 


 

Anchor file format:

 

In order to anchor items, an anchor file must be created of the following form:

1. Use one line per item (or item range) to be anchored.

2. Type the sequence number of the item in the current analysis, a blank, and the measure-value at which to anchor the item (in logits if UASCALE=1, or in your user-rescaled USCALE= units otherwise). Arithmetical expressions are allowed.

 Further values in each line are ignored. An IFILE= works well as an IAFILE=.

3. If the same item appears more than once, the first anchor value is used. When an IFILE= will be used as an IAFILE=, be sure to output the measures with many decimal places, e.g., UDECIMALS=4 (see the sketch below).
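
A minimal two-run sketch of reusing an IFILE= as an IAFILE= (the file name "bank.txt" is illustrative):

; calibration run: write the item measures with extra decimal places
IFILE = bank.txt
UDECIMALS = 4

; later, anchored run: reuse the same file, unedited, as the item anchor file
IAFILE = bank.txt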

 

UIMEAN= and UPMEAN= are ignored when there are anchor values (IAFILE= or PAFILE=).

 

Stopping estimation: usually Winsteps estimation converges successfully by itself. If it does not, pressing Ctrl+F stops estimation. If this happens repeatedly for an analysis, you can explicitly tell Winsteps to stop under the condition you watch for when ending estimation manually. For instance, if you decide to stop estimation when the biggest change to the logit estimates is less than .01 logits, specify:

LCONV=.01

CONVERGE=L

With anchor values, the score residuals usually never become zero, so this choice makes sense.

 

Examples:

2 3.47 ; anchors item 2 at 3.47 logits (or USCALE= values)

10-13 1.3 ; items 10, 11, 12, 13 are each anchored at 1.3 logits

2 5.2 ; item 2 is already anchored. This item anchoring is ignored

1-50 0 ; all the unanchored items in the range 1-50 are anchored at 0.

 

Anything after ";" is treated as a comment.

 

IAFILE = filename

Item anchor information is in a file containing lines of format

 item entry number       anchor value

 item entry number       anchor value

 

IAFILE=*

Item anchor information is in the control file in the format

 IAFILE=*

 item entry number     anchor value

 item entry number     anchor value

 *

 

IAFILE=$SnnEnn or IAFILE=$SnnWnn or @Field

Item anchor information is in the item labels using the column selection rules. Blanks or non-numeric values indicate no anchor value.

 


 

Anchoring and Extreme items

 

In the original calibration, extreme items are given an estimated finite difficulty. They are not used in fit reporting and person measurement.

 

In the anchored calibration, all items are anchored at their estimated difficulties, including the previously extreme items. All items including previously extreme items are used in fit reporting and person measurement.

 

If the anchor values for the extreme items are not to be used in the anchored analysis, we need to eliminate the extreme items from the anchor file:

 

In an interactive run of Winsteps:

1. analyze any dataset

2. Output Files menu

3. IFILE=

4. Select fields and other options

5. Check "flag extremes with ;"

6. "Make default"

7. Cancel your way out of Winsteps

 

The result is that all IFILE= output files will have ";" as the first character of extreme-score lines. These lines are then treated as comments when the file is processed as an anchor file by IAFILE=.
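
For instance (an illustrative sketch: the item names and the simplified column layout are not the exact IFILE= format):

 4  2.3745  House   ; anchored as usual at 2.3745
;7  5.1043  Zebra   ; extreme item: the leading ";" makes the whole line a comment, so item 7 is not anchored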

 


 

Example 0: only one item is to be anchored:

Slow method - include in your control file:

USCALE=1        ; anchor value and analysis in logits

CONVERGE= L      ; Convergence decided by logit change

LCONV=.00001     ; Set logit convergence tight because of anchoring

IAFILE = *       ; Item anchor file to preset the difficulty of an item

6 0.25            ; Item 6 exactly at 0.25 logit point.

*

 

Faster method:

1) do a standard unanchored analysis

2) output Table 14 items

3) see the measure for item 6 (for me it is 1.30)

4) edit your control file so that UIMEAN = wanted value - current value = 0.25 - 1.30 = -1.05 (see the sketch after these steps)

5) do the standard unanchored analysis again: item 6 is now 0.25
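
A minimal control-file sketch of the faster method (assuming UIMEAN= was previously at its default of 0 and the unanchored analysis reported item 6 at 1.30):

UIMEAN = -1.05  ; = wanted value (0.25) - current value (1.30): all measures shift by -1.05, so item 6 reports as 0.25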

 

Example 1: The third item is to be anchored at 1.5 logits, and the fourth at 2.3 logits.

1. Create a file named, say, "ANC.FIL"

2. Enter the line "3 1.5" into this file, which means "item 3 in this test is to be fixed at 1.5 logits".

3. Enter a second line "4 2.3" into this file, which means "item 4 in this test is to be fixed at 2.3 logits".

4. Specify, in the control file,

USCALE=1        ; anchor value and analysis in logits

IAFILE=ANC.FIL

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

 

or place directly in the control file:

IAFILE=*

3 1.5

4 2.3

*

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

 

or with the item labels:

IAFILE=$S10W4 ; location of anchor value in item label

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

&END

Zoo

House   1.5  ; item label and anchor value

Garden  2.3

Park

END LABELS

 

To check: "A" after the measure means "anchored"

 

+----------------------------------------------------------------------------------------+
|ENTRY    RAW                        |   INFIT  |  OUTFIT  |PTMEA|        |              |
|NUMBER  SCORE  COUNT  MEASURE  ERROR|MNSQ  ZSTD|MNSQ  ZSTD|CORR.|DISPLACE| ITEMS        |
|------------------------------------+----------+----------+-----+--------+--------------|
|     3     32     35     1.5A    .05| .80   -.3| .32    .6|  .53|     .40| House        |

 

Example 2: The calibrations from one run are to be used to anchor subsequent runs. The items have the same numbers in both runs. This is convenient for generating tables not previously requested.

1. Perform the calibration run, say,

C:> Winsteps SF.TXT SOMEO.TXT IFILE=ANCHORS.SF TABLES=111

 

2. Perform the anchored runs, say,

C:> Winsteps SF.TXT MOREO.TXT IAFILE=ANCHORS.SF TABLES=0001111

C:> Winsteps SF.TXT CURVESO.TXT IAFILE=ANCHORS.SF CURVES=111

 

Example 3: Score-to-measure Table 20 is to be produced from known item and rating scale structure difficulties.

Specify:

USCALE= value ; scaling of the anchor values

IAFILE=iafile.txt  ; the item anchor file

SAFILE=safile.txt ; the structure/step anchor file (only for polytomies)

TFILE=*

20   ; the score table

*

CONVERGE=L  ; only logit change is used for convergence

LCONV=0.005  ; logit change too small to appear on any report.

STBIAS=NO ; anchor values do not need estimation bias correction.

The data file comprises two dummy data records, so that every item has a non-extreme score (a complete control-file sketch follows these records), e.g.,

For dichotomies:

CODES = 01

 Record 1: 10101010101

 Record 2: 01010101010

 

For a rating scale from 1 to 5:

CODES = 12345

 Record 1: 15151515151

 Record 2: 51515151515
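
A complete control-file sketch for the dichotomous case (11 items; TITLE=, NI=, ITEM1=, NAME1= and the dummy person names are illustrative, and the item labels are omitted for brevity):

TITLE = "Table 20 from known item difficulties"
NI = 11              ; 11 dichotomous items
ITEM1 = 1            ; responses start in column 1 of each record
NAME1 = 13           ; person label starts in column 13
CODES = 01
USCALE = 1           ; assuming the anchor values are in logits
IAFILE = iafile.txt  ; the known item difficulties
; SAFILE= is not needed for dichotomies
TFILE=*
20                   ; the score-to-measure table
*
CONVERGE = L
LCONV = 0.005
STBIAS = NO
&END
END LABELS
10101010101 dummy1
01010101010 dummy2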

 

Example 4. Anchoring polytomous items for the Rating Scale Model

 

CODES = 012 ; 3 category Rating Scale Model

IAFILE=*

1 2.37 ; anchor item 1 at 2.37 logits

2 -1.23

*

 

SAFILE=*

0 0 ; the bottom category is always anchored at 0

1 -2.34 ; Andrich threshold (step difficulty) from category 0 to 1

2 2.34 ; Andrich threshold (step difficulty) from category 1 to 2

*

 

Example 5. Anchoring polytomous items for the Partial Credit and Grouped-Items models

 

CODES = 012 ; 3 categories, Partial Credit / Grouped-Items model

ISGROUPS=0

IAFILE=*

1 2.37 ; anchor item 1 at 2.37 logits

2 -1.23

*

 

SAFILE=*

; for item 1, relative to the difficulty of item 1

1 0 0 ; the bottom category is always anchored at 0

1 1 -2.34 ; Andrich threshold (step difficulty) from category 0 to 1

1 2 2.34 ; Andrich threshold (step difficulty) from category 1 to 2

; for item 2, relative to the difficulty of item 2

2 0 0 ; the bottom category is always anchored at 0

2 1 -1.54 ; Andrich threshold (step difficulty) from category 0 to 1

2 2 1.54 ; Andrich threshold (step difficulty) from category 1 to 2

*

