Glossary, Dictionary, Lexicon of Rasch Measurement Terminology 

Spanish Glossary (Glosario Español): www.rasch.org/rmt/glosario.htm 

Ability 
the level of successful performance of the objects of measurement (persons) on the latent variable: each person's location on the unidimensional variable, measured in "additive Rasch units", usually logits. 
Additive scale 
Scale of measurement in which the units have the properties of simple addition, so that "one more unit = the same amount extra regardless of the amount you already have". Typical measuring devices such as tape measures and thermometers have additive scales. Rasch additive scales are usually delineated in logits. 
Agent of Measurement 
the tool (items, questions, etc.) used to define a latent variable, and to position objects of measurement (persons etc.) along that variable. 
Analytic rating 
a rating of a specific aspect of a performance (cf. Holistic rating) 
Anchor 
the process of using anchor values to ensure that different analyses produce directly comparable results. 
Anchor Value 
a preset logit value assigned to a particular object, agent or step to be used as a reference value for determining the measurements or calibrations of other objects, agents or steps. 
Anchor Table 
the table of Anchor Values used during Rasch analysis of an Input Grid and so included in the Results Table produced. The Anchor Table has the same format as the Results Table. 
Anchoring 
the process of using anchor values to ensure that different analyses produce directly comparable results. 
Best Test Design 
Wright, B.D. & Stone, M.H., Best Test Design: Rasch Measurement. Chicago: Mesa Press, 1979 
Bias 
A change in logit values based on the particular agents or objects measured. 
BOTTOM 
The value shown in the Results Table for an agent on which all objects were successful (so it was of bottom difficulty), or for an object which had no success on any agent (so it was of bottom ability). 
Bottom Category 
the response category at which no level of successful performance has been manifested. 
Calibration 
a difficulty measure in logits used to position the agents of measurement (usually test items) along the latent variable. 
CAT Test 
Computer-Adaptive Test. A test administered by computer in which the display of the next item depends on the response to the previous item. 
Categories CATS 
qualitative levels of performance on an observational or response format, e.g., a rating scale. 
Cell 
Location of data in the spreadsheet, given by a column letter designation and row number designation e.g. B7 
Classical Test Theory 
Item analysis in which the raw scores are treated as additive numbers. 
Common Scale 
a scale of measurement on which all agents and objects can be represented. 
Column 
Vertical line of data in the Spreadsheet, usually representing, in an Input Grid, all responses to a particular item, or, in a Results Table, all statistics measuring the same attribute of agents or objects. 
Comment 
A semicolon ; followed by text. This is ignored by Winsteps and Facets. 
Complete data 
Data in which every person responds to every item. It makes a completely-filled rectangular data matrix. There are no missing data. 
Computer-Adaptive Test 
CAT Test. A test administered by computer in which the display of the next item depends on the response to the previous item. 
Construct validity 
The correlation between the item difficulties and the latent trait as intended by the test constructor. "Is the test measuring what it is intended to measure?" 
Content 
the subject area evoked and defined by an agent. 
Continuation line 
A separate line of text which Winsteps analyses as appended to the end of the previous line. These are shown with "+". 
Contrast component 
In the principal components analysis of residuals, a principal component (factor) which is interpreted by contrasting the items (or persons) with opposite loadings (correlations) on the component. 
Control file 
A DOS-text file on your disk drive containing the Winsteps control variables. 
Control variable 
In Winsteps, "control variable = value", is an instruction for controlling the computer program, e.g., "ITEM1 = 4". 
Convergence 
the point at which further improvement of the item and person estimates makes no useful difference in the results. Rasch calculation ends at this point. 
Correlation 
the relationship between two variables 
CTT 
Classical Test Theory 
Data file 
Winsteps: file containing the person labels and the responses to the items. It is part of the Control file if DATA= or MFORMS= are not used. 
Demographics 
Information about the person included in the person label, e.g., "F" for female or "M" for male. 
Deterministic 
Exactly predictable without any uncertainty. This contrasts with Probabilistic. 
Dichotomous Response 
a response format of two categories such as correct/incorrect, yes/no, agree/disagree. 
DIF Differential item functioning 
Change of item difficulty depending on which person classification-group is responding to the item; also called "item bias". 
Difficulty 
the level of resistance to successful performance of the agents of measurement on the latent variable. An item with high difficulty has a low marginal score. The Rasch item difficulty is the location on the unidimensional latent variable, measured in additive Rasch units, usually logits. 
Dimension 
a latent variable which is influencing the data values. 
Discrepancy 
one or more unexpected responses. 
Distractor 
Incorrect answer to a multiplechoice question, which is intended to distract the examinee away from the correct option. Sometimes all the options, correct and incorrect, are called "distractors". 
Disturbance 
one or more unexpected responses. 
Diverging 
the estimated calibrations at the end of an iteration are further from convergence than at the end of the previous iteration. 
Easiness 
the level of susceptibility to successful performance of the agents of measurement on the latent variable. An item with high easiness has a high marginal score. 
Eigenvalue 
The value of a characteristic root of a matrix, the numerical "size" of the matrix 
Element 
Individual in a facet, e.g., a person, an item, a judge, a task, which participates in producing an observation. 
Empirical 
Based on observation or experiment 
Empirical data 
data derived from observation or experimentation 
END LABELS END NAMES 
Winsteps: the end of the list of item identifying labels. This is usually followed by the data. 
Entry number 
Sequence number of the person or item in the dataset. Person: Entry number 1 is the top row of the response-level data. Item: Entry number 1 is the left-hand column of item-response data. 
Equating 
Putting the measures from two tests in the same frame of reference 
Estimate 
A value obtained from the data. It is intended to approximate the exactly true, but unknowable value. 
EXP Expected value 
Value predicted for this situation based on the measures 
Expected Response 
the predicted response by an object to an agent, according to the Rasch model analysis. 
EXP() Exponential 
Mathematical function used in estimating the Rasch measures 
Exponential form 
The Rasch model written in terms of exponentials, the form most convenient for computing response probabilities. 
Extreme item 
An item with an extreme score. Either everyone in the sample scored in the top category on the item, or everyone scored in the bottom category. An extreme measure is estimated for this item, and it fits the Rasch model perfectly, so it is omitted from fit reports. 
Extreme person 
A person with an extreme score. This person scored in the top category on every item, or in the bottom category on every item. An extreme measure is estimated for this person, who fits the Rasch model perfectly, and so is omitted from fit reports. 
Facet 
The components conceptualized to combine to produce the data, e.g., persons, items, judges, tasks. 
Fit Statistic 
a summary of the discrepancies between what is observed and what we expect to observe. 
Focal group 
The person classification-group which is the focus of a differential-item-functioning investigation. 
Frame of reference 
The measurement system within which measures are directly comparable 
Fundamental Measurement 
1. Measurement which is not derived from other measurements. 2. Measurement which is produced by an additive (or equivalent) measurement operation. 
Guttman 
Louis Guttman (1916-1987) organized data into Scalograms, intending that the observed response by any person to any item could be predicted deterministically from their positions in the Scalogram. 
Guttman pattern 
Success on all the easier items. Failure on all the more difficult items. 
Heading 
an identifier or title for use on tables, maps and plots. 
Holistic rating 
One rating which captures all aspects of the performance (cf. Analytic rating) 
Hypothesis test 
Fit statistics report on a hypothesis test. Usually the null hypothesis to be tested is something like "the data fit the model", "the means are the same", "there is no DIF". The null hypothesis is rejected if the results of the fit test are significant (p≤.05) or highly significant (p≤.01). The opposite of the null hypothesis is the alternate hypothesis. 
Imputed data 
Data generated by the analyst or assumed by the analytical process instead of being observed. 
Independent 
Not dependent on which particular agents and objects are included in the analysis. Rasch analysis is independent of agent or object population as long as the measures are used to compare objects or agents which are of a reasonably similar nature. 
Infit 
an information-weighted or inlier-sensitive fit statistic that focuses on the overall performance of an item or person, i.e., the information-weighted average of the squared standardized deviation of observed performance from expected performance. The statistic plotted and tabled by Rasch is this mean square normalized. 
Interval scale 
Scale of measurement on which equal intervals represent equal amounts of the variable being measured. Rasch analysis constructs interval scales with additive properties. 
Item 
agent of measurement (prompt, probe, "rating scale"), not necessarily a test question, e.g., a product rating. The items define the intended latent trait. 
Item bank 
Database of items including the item text, scoring key, difficulty measure and relevant statistics, used for test construction or CAT tests 
Iteration 
one run through the data by the Rasch calculation program, done to improve estimates by minimizing residuals. 
Knox Cube Test 
a tapping pattern test requiring the application of visual attention and short term memory. 
Latent Trait 
The idea of what we want to measure. A latent trait is defined by the items or agents of measurement used to elicit its manifestations or responses. 
Link 
Relating the measures derived from one test with those from another test, so that the measures can be directly compared. 
LN() Logarithm 
Natural or Napierian logarithm. A logarithm to the base e, where e = 2.718... This contrasts with logarithms to the base 10. 
Local origin 
Zero point we have selected for measurement, such as sea-level for measuring mountains, or freezing-point for Celsius temperature. The zero point is chosen for convenience (similarly to a "setting-out point"). In Rasch measurement, it is often the average difficulty of the items. 
Log-odds 
The natural logarithm of the ratio of two probabilities (their odds). 
Logit 
"Log-odds unit": the unit of measure used by Rasch for calibrating items and measuring persons on the latent variable. A logarithmic transformation of the ratio of the probabilities of a correct and incorrect response, or of the probabilities of adjacent categories on a rating scale. 
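As a minimal sketch of the logit transformation and its inverse (illustrative Python, with hypothetical function names):

```python
import math

def prob_to_logit(p):
    """Log-odds: natural log of the ratio of success to failure probability."""
    return math.log(p / (1 - p))

def logit_to_prob(x):
    """Inverse transformation: logit back to probability."""
    return 1 / (1 + math.exp(-x))

even = prob_to_logit(0.5)   # 0.0 logits: success and failure equally likely
p1 = logit_to_prob(1.0)     # about 0.73: one logit above the item
```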
Logistic curvefitting 
an estimation method in which the improved value of an estimate is obtained by incrementing along a logistic ogive from its current value, based on the size of the current raw-score residual. 
Logistic ogive 
the relationship between additive measures and the probabilities of dichotomous outcomes. 
Logit-linear 
The Rasch model written in terms of log-odds, so that the measures are seen to form a linear, additive combination. 
Map 
a bar chart showing the frequency and spread of agents and objects along the latent variable. 
Matrix 
a rectangle of responses with rows (or columns) defined by objects and columns (or rows) defined by agents. 
MCQ Multiple-Choice Question 
This is an item format often used in educational testing where the examinee selects the letter corresponding to the correct answer. 
Mean-square MnSq 
A mean-square fit statistic is a chi-square statistic divided by its degrees of freedom (d.f.). Its expectation is 1.0. Values below 1.0 indicate that the data are too predictable = overly predictable = overfit of the data to the model. Values above 1.0 indicate that the data are too unpredictable = underfit of the data to the model. 
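A sketch of the unweighted (outfit-style) computation, assuming dichotomous observations whose model variance is p(1-p); the names are illustrative, not from any particular program:

```python
def mean_square(observed, expected, variance):
    """Average squared standardized residual; expectation 1.0 under the model."""
    z_squared = [(o - e) ** 2 / v
                 for o, e, v in zip(observed, expected, variance)]
    return sum(z_squared) / len(z_squared)

# Dichotomous items: expected value p, model variance p * (1 - p)
p = [0.5, 0.73, 0.27]
obs = [1, 1, 0]
var = [pi * (1 - pi) for pi in p]
mnsq = mean_square(obs, p, var)   # below 1.0: more predictable than modeled
```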
Measure Measurement 
the location (usually in logits) on the latent variable. The Rasch measure for persons is the person ability. The Rasch measure for items is the item difficulty. 
Menu bar 
This is at the top of a program's window, and shows a list of standard program operations 
Misfit 
Any difference between the data and the model predictions. Misfit usually refers to "underfit": the data are too unpredictable. 
Missing data 
Data which are not responses to the items. They can be items which the examinees did not answer (usually scored as "wrong") or items which were not administered to the examinee (usually ignored in the analysis). 
Model 
Mathematical conceptualization of a relationship 
Muted 
Overfit to the Rasch model. The data are too predictable. The opposite is underfit, excessive noise. 
Newton-Raphson iteration 
A general method for finding the solution of nonlinear equations 
Noise 
1. Randomness in the data predicted by the Rasch model. 2. Underfit: excessive unpredictability in the data, perhaps due to excessive randomness or multidimensionality. 
Normal 
a random distribution, graphically represented as a "bell" curve, which in its unit-normal form has a mean value of 0 and a standard deviation of 1. 
Normalized 
1. the transformation of the actual statistics obtained so that they are theoretically part of a unit-normal distribution. "Normalized" means "transformed into a unit-normal distribution". We do this so we can interpret the values as "unit-normal deviates", the x-values of the normal distribution. Important ones are ±1.96, the points on the x-axis for which 5% of the distribution is outside the points, and 95% of the distribution is between the points. 2. linearly adjusting the values so they sum to a predetermined amount. For instance, probabilities always sum to 1.0. 
Not administered 
an item which the person does not see. For instance, all the items in an item bank which are not part of a computer-adaptive test. 
Object of Measurement 
person, product, site, to be measured or positioned along the latent variable. 
OBS Observed 
Value derived from the data 
Observation Observed Response 
the actual response by an object to an agent. 
Odds ratio 
Ratio of two probabilities, e.g., "odds against" is the ratio of the probability of losing (or not happening) to the probability of winning (or happening). 
Outfit 
an outliersensitive fit statistic that picks up rare events that have occurred in an unexpected way. It is the average of the squared standardized deviations of the observed performance from the expected performance. Rasch plots and tables use the normalized unweighted mean squares so that the graphs are symmetrically centered on zero. 
Outliers 
unexpected responses usually produced by agents and objects far from one another in location along the latent variable. 
Overfit 
The data are too predictable. There is not enough randomness in the data. This may be caused by dependency or other constraints. 
Perfect score 
Every response "correct" or the maximum possible score. Every observed response in the highest category. 
Person 
the object of measurement, not necessarily human, e.g., a product. 
Plot 
an x-y graph used by Rasch to show the fit statistics for agents and objects. 
Point Labels 
the placing on plots of the identifier for each point next to the point as it is displayed. 
Point-measure correlation PTMEASURE, PTMEA 
The correlation between the observations in the data and the measures of the items or persons producing them. 
Poisson Counting 
a method of scoring tests based on the number of occurrences or non-occurrences of an event, e.g., spelling mistakes in a piece of dictation. 
Polarity 
The direction of the responses on the latent variable. If higher responses correspond to more of the latent variable, then the polarity is positive. Otherwise the polarity is negative. 
Polytomous response 
responses in more than two ordered categories, such as Likert rating scales. 
Population 
Every person (or every item) with the characteristics we are looking for. 
Predictive validity 
This is the amount of agreement between results obtained by the evaluated instrument and results obtained more directly, e.g., the correlation between success level on a test of carpentry skill and success level making furniture for customers. "Do the person measures correspond to more and less of what we are looking for?" 
Probabilistic 
Predictable to some level of probability, not exactly. This contrasts with Deterministic. 
Process 
the psychological quality, i.e., the ability, skill, attitude, etc., being measured by an item. 
PROX 
the "Normal Approximation" estimation algorithm (Cohen, 1979), used to obtain initial estimates for the iterative estimation process. 
Rack 
Placing the responses to two tests in adjacent columns for each person, as though the items were being placed on a rack (cf. Stack). 
Rasch, Georg 
Danish mathematician (1906-1980), who first propounded the statistical approach to measurement that bears his name. 
Rasch measure 
linear, additive value on an additive scale representing the latent variable 
Rasch Model 
a mathematical formula for the relationship between the probability of success (P) and the difference between an individual's ability (B) and an item's difficulty (D): P = exp(B-D) / (1 + exp(B-D)), or equivalently log[P/(1-P)] = B - D. 
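The dichotomous formula can be computed directly; a minimal Python illustration:

```python
import math

def rasch_probability(b, d):
    """P = exp(B - D) / (1 + exp(B - D)), ability B and difficulty D in logits."""
    return math.exp(b - d) / (1 + math.exp(b - d))

even = rasch_probability(1.0, 1.0)     # 0.5: ability equals difficulty
likely = rasch_probability(2.0, 1.0)   # about 0.73: one logit advantage
```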
Rasch-Andrich Threshold 
Step calibration. Location on the latent variable (relative to the center of the rating scale) where adjacent categories are equally probable. 
Rating Scale 
A format for observing responses wherein the categories increase in the level of the variable they define, and this increase is uniform for all agents of measurement. 
Rating Scale Analysis 
Wright, B.D. & Masters, G.N., Rating Scale Analysis: Rasch Measurement. Chicago: Mesa Press, 1982. 
Raw score 
the marginal score; the sum of the scored observations for a person, item or other element. 
Reference group 
The person classification-group which provides the baseline item difficulty in a differential-item-functioning investigation. 
Reliability 
Reliability (reproducibility) = True Variance / Observed Variance (Spearman, 1904, etc.). It is the ratio of sample or test variance, corrected for estimation error, to the total variance observed. 
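Under the definition above, and using the standard identity linking reliability to Separation (R = Sep²/(1+Sep²)), a sketch with illustrative numbers:

```python
import math

def reliability(true_variance, error_variance):
    """Reliability = true variance / observed variance,
    where observed variance = true variance + error variance."""
    return true_variance / (true_variance + error_variance)

def reliability_from_separation(sep):
    """Equivalent form: R = Sep^2 / (1 + Sep^2)."""
    return sep ** 2 / (1 + sep ** 2)

r1 = reliability(0.8, 0.2)                              # 0.8
r2 = reliability_from_separation(math.sqrt(0.8 / 0.2))  # same sample: also 0.8
```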
Residuals 
the difference between data observed and values expected. 
Response 
The value of an observation or datapoint indicating the degree of success by an object (person) on an agent (item) 
Response set 
Choosing the same response on every item, such as always selecting option "C" on a multiple-choice test, or always selecting "Agree" on an attitude survey. 

Results Table 
a report of Rasch calculations. 
Rigidity 
when agents, objects and steps are all anchored, this is the logit inconsistency between the anchoring values, and is reported on the Iteration Screen and Results Table. 0 represents no inconsistency. 
Row 
a horizontal line of data on a Spreadsheet, usually used, in the Input Grid, to represent all responses by a particular object. The top row of each spreadsheet is reserved for Rasch control information. 
Ruleofthumb 
A tentative suggestion that is neither a requirement nor a scientific formula, but is based on experience and inference from similar situations. Originally, the use of the thumb as a unit of measurement. 
Sample 
the persons (or items) included in this analysis 
Scale 
the quantitative representation of a latent variable. 
Scalogram 
Picture of the data in which the persons (rows) and items (columns) are arranged by marginal raw scores. 
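A small sketch of arranging a person-by-item matrix into Scalogram order (illustrative code, not from any particular program):

```python
def scalogram(matrix):
    """Sort persons (rows) and items (columns) by marginal raw scores."""
    rows = sorted(matrix, key=sum, reverse=True)   # ablest person first
    col_totals = [sum(col) for col in zip(*rows)]
    order = sorted(range(len(col_totals)),
                   key=lambda j: col_totals[j], reverse=True)  # easiest item first
    return [[row[j] for j in order] for row in rows]

data = [[0, 1, 0],
        [1, 1, 1],
        [1, 1, 0]]
ordered = scalogram(data)   # approaches a Guttman pattern
```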
Score points 
the numerical values assigned to responses when summed to produce a score for an agent or object. 
Scoring key 
The list of correct responses to multiplechoice (MCQ) items. 
Scree plot 
Plot showing the fraction of total variance in the data in each variance component. 
Separation 
the ratio of sample or test standard deviation, corrected for estimation error, to the average estimation error. This is the number of statistically different levels of performance that can be distinguished in a normal distribution with the same "true" S.D. as the current sample. Separation = 2: high measures are statistically different from low measures. 
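A sketch of the computation, assuming we have the observed standard deviation of the measures and their root-mean-square estimation error (RMSE); the numbers are illustrative:

```python
import math

def separation(observed_sd, rmse):
    """'True' SD (observed variance minus error variance) over the average error."""
    true_variance = max(observed_sd ** 2 - rmse ** 2, 0.0)
    return math.sqrt(true_variance) / rmse

sep = separation(1.0, 0.45)   # about 2: two statistically distinct levels
```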
Specification 
A Winsteps controlvariable and its value, e.g., "Name1=17" 
Stack 
Analyzing the responses of the same person to multiple administrations of the same test as though they were made by separate persons, by "stacking" the person records in one long data file, c.f., "rack" 
Standard Deviation S.D. 
The root mean square of the differences between the sample of values and their mean value. In Winsteps, all standard deviations are "population standard deviations" (the sample is the entire population). For the larger "sample standard deviation" (the sample is a random selection from the population), please multiply the Winsteps standard deviation by the square root of (sample size / (sample size - 1)). 

Standard Error 
An estimated quantity which, when added to and subtracted from a logit measure or calibration, gives the least distance required before a difference becomes meaningful. 
Step calibration Step difficulty 
Rasch-Andrich threshold. Location on the latent variable (relative to the center of the rating scale) where adjacent categories are equally probable. 
Steps 
the transitions between adjacent categories as ordered by the definition of the latent variable. 
Strata 
= (4*Separation + 1)/3. This is the number of statistically different levels of performance that can be distinguished in a normal distribution with the same "true" S.D. as the current sample, when the tails of the normal distribution are due to "true" measures, not measurement error. Strata = 3: very high, middle, and very low measures can be statistically distinguished. 
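The Strata formula in code form (illustrative):

```python
def strata(sep):
    """Strata = (4 * Separation + 1) / 3."""
    return (4 * sep + 1) / 3

levels = strata(2.0)   # 3.0: very high, middle, and very low
```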
Sufficient statistic 
A statistic (a number) which contains all the information in the data from which to estimate the value of a parameter. 
Suffix 
The letters added to a file name which specify the file format, e.g., ".txt" means "text file". If you do not see the suffix letters, instruct Windows to display them. See the Lesson 1 Appendix. 
Table 
Lists of words and numbers, arranged in columns, usually surrounded by "|". 
Targeted 
when the item difficulty is close to the person ability, so that the probability of success on a dichotomous item is near to 50%, or the expected rating is near to the center of the rating scale. 
Targeting 
Choosing items with difficulty equal to the person ability. 
Task bar 
This shows the Windows programs at the bottom of your computer screen 
Template 
a specially formatted input file. 
Test length 
The number of items in the test 
Test reliability 
The reliability (reproducibility) of the measure (or raw score) hierarchy of a sample like this sample for this test. The reported reliability is an estimate of (true variance)/(observed variance), as also are Cronbach Alpha and KR-20. 
TOP 
The value shown in the Results Table for an agent on which no objects were successful (so it was of top difficulty), or for an object which succeeded on every agent (so it was of top ability). 
Top Category 
the response category at which maximum performance is manifested. 
UCON 
the unconditional (or "joint" JMLE) maximum likelihood estimation formula, used by some Rasch programs for the second part of the iteration process. 
Underfit 
The data are too unpredictable. The data underfit the model. This may be because of excessive guessing, or contradictory dimensions in the data. 
UNSURE 
Rasch was unable to calibrate these data and treated them as missing. 
Unweighted 
the situation in which all residuals are given equal significance in fit analysis, regardless of the amount of the information contained in them. 
Variable 
a quantity or quality which can change its value 
Weighted 
the adjustment of a residual for fit analysis, according to the amount of information contained in it. 
Zero score 
Every response "incorrect" or the minimum possible score. Every observed response in the lowest category. 
ZSTD 
Probability of a mean-square statistic expressed as a z-statistic, i.e., a unit-normal deviate. For p≤.05 (double-sided), |ZSTD|>1.96. 
&END 
The end of the list of Winsteps control variables 
&INST 
The beginning of the list of Winsteps control variables. This is not necessary. 
Help for Winsteps Rasch Measurement Software: www.winsteps.com. Author: John Michael Linacre