Testing effect size homogeneity is an essential part of conducting a meta-analysis. Comparative studies of effect size homogeneity tests for binary outcomes exist in the literature, but no single test has emerged as an absolute winner. An alternative approach is to carry out multiple effect size homogeneity tests on the same meta-analysis and combine the resulting dependent p-values. In this article we applied the correlated Lancaster method for dependent statistical tests. To investigate the proposed approach’s performance, we applied eight different effect size homogeneity tests to a case study and to simulated datasets, and combined the resulting p-values. The proposed method performs similarly to tests based on the score function in the presence of an effect size when the number of studies is small, but outperforms these tests as the number of studies increases. However, the method’s performance is sensitive to the correlation coefficient assumed between the dependent tests, and the method only performs well when this value is high. More research is needed to investigate the method’s assumptions on correlation for effect size homogeneity tests, and to study the method’s performance in meta-analyses of continuous outcomes.
Keywords: Meta-analysis, 2×2 contingency tables, effect size, homogeneity test, dependent p-values