Bonferroni - Multiple t-tests
The exact statement of your null hypothesis determines whether a Bonferroni correction applies. If you have a list of t-tests, and a significant result for even one of those t-tests rejects the null hypothesis, then a Bonferroni correction (or similar) is needed.
Let's assume your hypothesis is "this instrument does not exhibit DIF", and you are going to test the hypothesis by looking at the statistical significance probabilities reported for each t-test in a list of t-tests. Then, by chance, we would expect about 1 out of every 20 t-tests to report p≤.05. So, if there are more than 20 t-tests in the list, a p≤.05 for an individual t-test is not, by itself, significant. In fact, if we don't see at least one p≤.05, we may be surprised!
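As a quick check of that intuition, here is a short calculation (a sketch, not part of the original text) of the chance of seeing at least one p≤.05 among independent tests when every null hypothesis is true:

```python
# Probability of at least one "significant" result among n independent
# null-true tests at level alpha: 1 - (1 - alpha)^n.
def prob_at_least_one(n, alpha=0.05):
    return 1 - (1 - alpha) ** n

print(round(prob_at_least_one(20), 2))  # about 0.64 for 20 tests
```

So with 20 tests there is roughly a 64% chance of at least one p≤.05 purely by chance.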
The Bonferroni correction says, "if any of the t-tests in the list has p≤.05/(number of t-tests in the list), then the hypothesis is rejected".
What is important is the number of tests, not how many of them are reported to have p≤.05.
If you wish to make a Bonferroni multiple-significance-test correction, compare the reported significance probability with your chosen significance level, e.g., .05, divided by the number of t-tests in the Table. According to Bonferroni, if you are testing the null hypothesis "there is no effect in this set of tests" at the p≤.05 level, then the most significant effect must have p ≤ .05 / (number of item DIF contrasts) for the null hypothesis of no effect to be rejected.
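The Bonferroni rule above can be sketched in a few lines (the p-values here are hypothetical, purely for illustration):

```python
# Bonferroni check of the joint null hypothesis "no effect in any test":
# reject only if the smallest p-value is at or below alpha / (number of tests).
def bonferroni_reject(p_values, alpha=0.05):
    n = len(p_values)
    return min(p_values) <= alpha / n

# Hypothetical p-values for illustration (not from any real table):
print(bonferroni_reject([0.03, 0.20, 0.008, 0.45]))  # threshold .05/4 = .0125 -> True
```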
Question: Winsteps Tables report many t-tests. Should Bonferroni adjustments for multiple comparisons be made?
Reply: It depends on how you are conducting the t-tests. For instance, in Table 30.1, if your hypothesis (before examining any data) is "there is no DIF for this CLASS in comparison to that CLASS on this item", then the reported probabilities are correct.
If you have 20 items, then about one is expected to reach the p ≤ .05 criterion by chance alone. So if your hypothesis (before examining any data) is "there is no DIF in this set of items for any CLASS", then adjust the individual t-test probabilities accordingly.
In general, we do not consider the rejection of a hypothesis test to be "substantively significant", unless it is both very unlikely (i.e., statistically significant) and reflects a discrepancy large enough to matter (i.e., to change some decision). If so, even if there is only one such result in a large data set, we may want to take action. This is much like sitting on the proverbial needle in a haystack. We take action to remove the needle from the haystack, even though statistical theory says, "given a big enough haystack, there will probably always be a needle in it somewhere."
A strict Bonferroni correction for n multiple significance tests at joint level α is α/n for each single test. This accepts or rejects the entire set of multiple tests as a whole. In an example of a 100-item test with 20 bad items (.005 < p < .01), the cut-off threshold at α = .05 would be p ≤ .05/100 = .0005, so that the entire set of items is accepted.
Benjamini and Hochberg (1995) suggest that an incremental application of Bonferroni correction overcomes some of its drawbacks. Here is their procedure:
i) Perform the n single significance tests.
ii) Order them by ascending probability, P(i), where i = 1, ..., n.
iii) Identify k, the largest value of i for which P(i) ≤ α * i/n where α = .05 or α = .01
iv) Reject the null hypothesis for i = 1, ..., k.
In an example of a 100-item test with 20 bad items (with .005 < p < .01), the threshold values for cut-off with α = .05 would be: .0005 for the 1st item, .005 for the 10th item, .01 for the 20th item, .015 for the 30th item. Thus k would be at least 20, and perhaps more: all 20 bad items are flagged for rejection.
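The Benjamini-Hochberg step-up procedure above can be sketched as follows (the p-values mimic the 100-item example but are illustrative, not real data):

```python
# Benjamini-Hochberg step-up procedure, as described above:
# sort the p-values, find the largest rank i with P(i) <= alpha * i / n,
# and reject the null hypothesis for the i smallest p-values.
def benjamini_hochberg(p_values, alpha=0.05):
    n = len(p_values)
    indexed = sorted(enumerate(p_values), key=lambda t: t[1])
    k = 0
    for rank, (_, p) in enumerate(indexed, start=1):
        if p <= alpha * rank / n:
            k = rank
    # Return the indices of the tests whose null hypothesis is rejected.
    return {idx for idx, _ in indexed[:k]}

# 100-item example like the one above: 20 "bad" items with small p-values,
# 80 items with large p-values (values are illustrative, not real data).
p = [0.005 + 0.0002 * i for i in range(20)] + [0.5] * 80
print(len(benjamini_hochberg(p)))  # all 20 bad items are flagged
```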
Benjamini Y. & Hochberg Y. (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1), 289-300.
Example of whether to Bonferroni or not ...
Hypothesis 1: There is no DIF between men and women on item 1.
This is tested for item 1 in Table 30.1
or there is no DPF between "addition items" and "subtraction items" for George
This is tested for George in Table 31.1
Hypothesis 2: There is no DIF between men and women on the 8 items of this test.
Look at the 8 pair-wise DIF tests in Table 30.1
Choose the smallest of the 8 p-values, p.
Compare it with .05/8 = .00625.
If p ≤ .05/8, then reject Hypothesis 2.
Or there is no DPF between "addition items" and "subtraction items" across the 1000 persons in the sample - Bonferroni applied to Table 31.1.
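The Hypothesis 2 steps above can be sketched directly (the 8 p-values are hypothetical, not taken from an actual Table 30.1):

```python
# Bonferroni test of Hypothesis 2: "no DIF on any of the 8 items".
# Reject if the smallest p-value satisfies p <= .05 / 8
# (equivalently, 8 * p <= .05).
p_values = [0.21, 0.04, 0.33, 0.004, 0.12, 0.50, 0.08, 0.27]  # hypothetical
smallest = min(p_values)
reject_hypothesis_2 = smallest <= 0.05 / len(p_values)
print(reject_hypothesis_2)  # 0.004 <= .05/8 = .00625 -> True
```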
Question: "Does this mean that, if one or a few t-tests turn out significant, you should reject the whole set of null hypotheses, and you cannot tell which items have DIF?"
Answer: You are combining two different hypotheses. Either you want to test the whole set (Hypothesis 2) or individual items (Hypothesis 1). In practice, we want to test individual items, so Bonferroni does not apply.
Let's contrast items (each of which is carefully and individually constructed) against a random sample from the population.
We might ask: "Is there Differential Person Functioning by this sample across these two types of items?" (Hypothesis 2 - Bonferroni), because we are not interested in investigating (and probably cannot investigate) individuals.
But we (and the lawyers) are always asking, "Is there Differential Item Functioning on this particular item for men and women?" (Hypothesis 1 - not Bonferroni).
Help for Winsteps Rasch Measurement Software: www.winsteps.com. Author: John Michael Linacre