
Does the conventionally calculated anchor-based minimum clinically important difference capture the real clinical increment? Determining the situations that make the answer "no" by a simulation study



The aim of this study was to examine, by simulation, the accuracy of the conventionally used method, the optimal cutoff of the Receiver Operating Characteristic (ROC) curve, for determining the minimum clinically important difference (MCID), an estimator of scale responsiveness. Baseline person parameters were first generated and, using these values, two gold-standard groups were constructed as "improved" and "non-improved" after treatment. Five-point Likert response patterns were obtained for 20 items in each group, representing individuals' pre- and post-treatment responses. After baseline and post-treatment total scores were calculated from the response patterns, the mean change score between post-treatment and baseline scores in the improved group was taken as the real MCID (MCIDR). The change-score cutoff identified by ROC analysis as best discriminating the improved from the non-improved group (MCIDROC) was compared with MCIDR. The simulation scenarios consisted of sample size and the distribution of total scores in the improved group. Data were generated for each of 40 scenarios with 1000 MCMC repeats. MCIDR and MCIDROC were not substantially affected by sample size; however, MCIDROC overestimated MCIDR in all scenarios. In short, the cutoff points obtained by ROC analysis were greater than the real MCID values, so alternative methods are required to calculate the MCID.
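To make the comparison concrete, the sketch below illustrates the two estimators on simulated change scores: MCIDR as the mean change in the improved group, and MCIDROC as the ROC cutoff maximizing Youden's J. The group sizes, means, and standard deviations are toy assumptions for illustration only; they are not the study's item-level generation procedure and need not reproduce its findings.

```python
# Minimal sketch (assumed toy parameters, not the study's data-generating model):
# compare the mean-change MCID (MCIDR) with the ROC-derived cutoff (MCIDROC).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)

# Assumed total-score changes for the anchor groups (hypothetical values).
n_improved, n_not_improved = 200, 200
change_improved = rng.normal(loc=8.0, scale=4.0, size=n_improved)
change_not_improved = rng.normal(loc=1.0, scale=4.0, size=n_not_improved)

# MCIDR: mean change score of the improved group.
mcid_r = change_improved.mean()

# MCIDROC: change-score cutoff that best separates the two groups,
# taken as the threshold maximizing Youden's J (sensitivity + specificity - 1).
labels = np.concatenate([np.ones(n_improved), np.zeros(n_not_improved)])
scores = np.concatenate([change_improved, change_not_improved])
fpr, tpr, thresholds = roc_curve(labels, scores)
mcid_roc = thresholds[np.argmax(tpr - fpr)]

print(f"MCIDR   (mean change, improved group): {mcid_r:.2f}")
print(f"MCIDROC (optimal ROC cutoff):          {mcid_roc:.2f}")
```

In a full simulation along the lines described above, this comparison would be repeated over many replicates and scenarios (varying sample size and the distribution of improved-group scores) to see how the two quantities diverge.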