# Paired comparisons of objects

Paired comparisons are simple in Facets.

Example 1: Paired comparison of objects by persons. In each pairing, one object wins.

Facet 1 is the objects to be compared. Each object is an element in the facet

Facet 2 is the persons doing the comparing. Each person is an element in the facet. This facet is a "dummy" facet. It is not used for measurement. It is used for fit analysis and interactions only.

So, here is what the Facets specification and data file look like:

Facets= 3 ; each observation has 3 elements in the data, 2 objects + 1 person

Entered= 1, 1, 2 ; the first two elements are for facet 1, the third element is for facet 2

Models= ?, -?,?,D ; the model is "measure of the first facet-1 element minus measure of the second facet-1 element, plus the facet-2 element", producing a dichotomous 0/1 observation

Labels=

1, Objects ; the object facet

1=A

2=B

3=C

....

*

2, Persons, D ; the person facet: this is a Dummy facet. It is ignored for estimation

4=Mary

5=George

.....

*

Data=

1,2,4,1 ; Object A is compared with Object B by Mary. Object A wins

2,3,5,0 ; Object B is compared with Object C by George. Object B loses
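Under this model, the probability that one object beats another depends only on the difference between their logit measures (the dummy person facet contributes nothing to estimation). A minimal sketch in Python, using made-up measures that are not taken from the example data:

```python
import math

def win_probability(measure_a: float, measure_b: float) -> float:
    """Rasch probability that object A beats object B in a pairing."""
    return 1.0 / (1.0 + math.exp(-(measure_a - measure_b)))

# Hypothetical logit measures for objects A, B, C (illustrative only).
measures = {"A": 1.0, "B": 0.0, "C": -1.0}

# A is 1 logit above B, so A is expected to win about 73% of such pairings.
print(round(win_probability(measures["A"], measures["B"]), 3))  # → 0.731
```

Note the symmetry: the probability that B beats A is one minus the probability that A beats B, which is why a single 0/1 observation per pairing is enough in this layout.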

Example 2: Paired comparison of objects by persons. In each pairing, one object wins and one object loses, or the two objects tie (a draw).

Score: 2=Win 1=Tie 0=Loss.

Facet 1 is the objects to be compared. Each object is an element in the facet

Facet 2 is the persons doing the comparing. Each person is an element in the facet. This facet is a "dummy" facet. It is not used for measurement. It is used for fit analysis and interactions only.

So, here is what the Facets specification and data file look like:

Facets= 3 ; each observation has 3 elements in the data, 2 objects + 1 person

Entered= 1, 1, 2 ; the first two elements are for facet 1, the third element is for facet 2

Models= ?, -?,?,R2, 0.5 ; the model is "measure of the first facet-1 element minus measure of the second facet-1 element, plus the facet-2 element", producing a polytomous 0/1/2 observation that is weighted 0.5 because each observation appears twice in the data file.

Labels=

1, Objects ; the object facet

1=A

2=B

3=C

....

*

2, Persons, D ; the person facet: this is a Dummy facet. It is ignored for estimation

4=Mary

5=George

.....

*

Data=

; each observation twice:

1,2,4,2 ; Object A is compared with Object B by Mary. Object A wins

2,1,4,0 ; Object B is compared with Object A by Mary. Object B loses

2,3,5,0 ; Object B is compared with Object C by George. Object B loses

3,2,5,2 ; Object C is compared with Object B by George. Object C wins

1,2,5,1 ; Object A is compared with Object B by George. Object A ties

2,1,5,1 ; Object B is compared with Object A by George. The objects tie
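Because every pairing is entered twice, once in each orientation with the mirrored score (2 becomes 0, 1 stays 1), the 0.5 weight in Models= stops each comparison from counting double. A small sketch of how the doubled data lines above can be generated from one record per pairing:

```python
# Each judged pairing: (object1, object2, person, score), where the score is
# for object1 with 2 = win, 1 = tie, 0 = loss.
pairings = [
    (1, 2, 4, 2),  # A vs B judged by Mary: A wins
    (2, 3, 5, 0),  # B vs C judged by George: B loses
    (1, 2, 5, 1),  # A vs B judged by George: they tie
]

data_lines = []
for obj1, obj2, person, score in pairings:
    data_lines.append(f"{obj1},{obj2},{person},{score}")
    # Mirrored record: objects swapped, score reflected (2 - score).
    data_lines.append(f"{obj2},{obj1},{person},{2 - score}")

print("\n".join(data_lines))
```

Running this reproduces the six data lines shown above, with each comparison appearing once in each orientation.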

Example 3.

Example 4. Flavor Strength of Gels

Bayesian imputation for unrealistically huge logit ranges or inestimable elements

A frequently-encountered problem in the analysis of paired-comparison data is an almost Guttman ordering of the pairings. This can lead to unrealistically huge logit ranges for the estimates of the elements or inestimable elements.

To solve this problem, we apply a little Bayesian logic. We know that the range of paired performances is not exceedingly wide, and we can easily imagine a performance better than any of those being paired, and also a performance worse than any of those being paired. Let's hypothesize that a reasonable logit distance between those two hypothetical performances is, say, 20 logits.

http://www.rasch.org/rmt/rmt151w.htm describes a parallel situation for sports teams.

1. Hypothesize a "best" performance against which every other performance is worse. Anchor it at +10 logits.

2. Hypothesize a "worst" performance against which every other performance is better. Anchor it at -10 logits.

3. Hypothesize a dummy judge who compares the best and worst performances against all the other performances.

4. Include these dummy observations in the analysis.

5. Analyze the actual observations + the dummy observations. The analysis should make sense, and the logit range of the performances will be about 20 logits. For reporting, we don't want the dummy material, so we write an Anchorfile= from this analysis.

6. We then use the Anchorfile= as the Facets specification file, commenting out the "best" and "worst" performance elements and the dummy judge. We analyze only the actual observations. All the elements are anchored at their estimates from the actual+dummy analysis. In this anchored analysis, the "displacements" indicate the impact of the dummy data on the estimates.

7. If you perceive that the 20-logit range is too big or too small, adjust the "best" and "worst" anchor values.
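Steps 1-4 amount to appending a small block of dummy data lines to the real observations. A sketch of how those lines could be generated, where the element numbers for the "best" and "worst" performances and the dummy judge are illustrative choices, not values prescribed by Facets:

```python
# Hypothetical element numbers (illustrative): 98 = "best" performance
# anchored at +10 logits, 99 = "worst" performance anchored at -10 logits,
# 9 = dummy judge. Real objects are elements 1..3.
real_objects = [1, 2, 3]
BEST, WORST, DUMMY_JUDGE = 98, 99, 9

dummy_data = []
for obj in real_objects:
    dummy_data.append(f"{BEST},{obj},{DUMMY_JUDGE},1")   # "best" beats every object
    dummy_data.append(f"{WORST},{obj},{DUMMY_JUDGE},0")  # "worst" loses to every object

print("\n".join(dummy_data))
```

These lines would be appended under Data= for the actual+dummy run (step 5), then dropped again for the anchored run (step 6).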

Help for Facets Rasch Measurement Software: www.winsteps.com Author: John Michael Linacre.
