The aim of this study is to investigate how the optimal cut-off methods commonly used for diagnostic tests with a continuous response perform for tests with an ordinal response. Diagnostic accuracy studies examine the ability of a diagnostic test to discriminate between patients with and without a condition. For diagnostic tests with a continuous response, it is important in practice to calculate the optimal cut-off point that best separates diseased from healthy individuals. Many methods have been proposed in the literature to obtain the optimal cut-point for continuous test results: the Youden index, the point closest to the (0, 1) corner of the ROC plane, the concordance probability, and the minimum P-value approach are among the most commonly used. However, studies examining the performance of these methods for tests with an ordinal response are lacking. We therefore compared these optimal cut-off methods on ordinal response data in a simulation study, with sample size and group balance as simulation conditions. The sample sizes of the diseased and non-diseased groups were set to (50, 50), (100, 100), and (200, 200) for the balanced design and (50, 100), (50, 150), and (50, 200) for the unbalanced design. For each scenario, 1000 replications were generated, and the differences between the estimated and true cut-off points (biases) were calculated. All methods overestimated the true cut-off point, but their median biases varied. The same held for the unbalanced design, whereas in the balanced design the minimum P-value approach had a median bias of 0 while the others had a median bias of 1.
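The three ROC-plane criteria named above can be sketched for ordinal scores as follows. This is a minimal illustration with hypothetical data, not the simulation code of the study; the function names (`roc_points`, `optimal_cutoffs`) are our own, and the minimum P-value approach is omitted to keep the sketch dependency-free.

```python
import numpy as np

def roc_points(scores_d, scores_h, cutoffs):
    """Sensitivity and specificity at each candidate cut-off
    (an individual tests positive if score >= cut-off)."""
    sens = np.array([(scores_d >= c).mean() for c in cutoffs])
    spec = np.array([(scores_h < c).mean() for c in cutoffs])
    return sens, spec

def optimal_cutoffs(scores_d, scores_h):
    """Optimal cut-off under three criteria; candidate cut-offs are the
    observed ordinal levels. Ties resolve to the lowest cut-off."""
    cutoffs = np.unique(np.concatenate([scores_d, scores_h]))
    sens, spec = roc_points(scores_d, scores_h, cutoffs)
    youden = cutoffs[np.argmax(sens + spec - 1)]               # Youden index J
    closest = cutoffs[np.argmin((1 - sens)**2 + (1 - spec)**2)]  # closest to (0, 1)
    concord = cutoffs[np.argmax(sens * spec)]                    # concordance probability
    return youden, closest, concord
```

On a small ordinal example, e.g. healthy scores `[1, 1, 2, 2, 3]` and diseased scores `[3, 4, 4, 5, 5]`, all three criteria agree on a cut-off in the overlap region of the two groups.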
The aim of this study was to examine, by simulation, the accuracy of the conventionally used method, the optimal cut-off of the Receiver Operating Characteristic (ROC) curve, for determining the minimum clinically important difference (MCID), the estimator of responsiveness for scales. Baseline person parameters were first generated and, using these values, two gold-standard groups were constructed as "improved" and "non-improved" after treatment. Five-point Likert response patterns were obtained for 20 items in each group, representing individuals' pre- and post-treatment responses. After baseline and post-treatment total scores were calculated from the response patterns, the mean change score between post-treatment and baseline scores in the improved group was taken as the real MCID (MCIDR). The change-score cut-off specified by ROC analysis that best discriminates the improved from the non-improved group, MCIDROC, was compared with MCIDR. The simulation scenarios consisted of sample size and the distribution of total scores for the improved group. Data were generated for each of 40 scenarios with 1000 MCMC repeats. MCIDR and MCIDROC were not greatly affected by sample size; however, MCIDROC overestimated MCIDR in all scenarios. In short, the cut-off points obtained by ROC analysis were found to be greater than the real MCID values, and therefore alternative methods are required to calculate the MCID.
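The two MCID estimators being compared can be sketched as follows. This is only a toy stand-in under simplifying assumptions: change scores are drawn directly from Gaussians with made-up parameters rather than from the 20-item five-point Likert design of the study, and the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical change scores (post-treatment total minus baseline total),
# drawn from Gaussians as a stand-in for scores built from Likert items.
change_improved = rng.normal(8.0, 3.0, size=150)      # gold-standard "improved"
change_not_improved = rng.normal(1.0, 3.0, size=150)  # gold-standard "non-improved"

# "Real" MCID (MCIDR): the mean change score in the improved group.
mcid_r = change_improved.mean()

# ROC-based MCID (MCIDROC): the change-score cut-off that best discriminates
# improved from non-improved, here chosen by maximizing the Youden index.
cutoffs = np.unique(np.concatenate([change_improved, change_not_improved]))
sens = np.array([(change_improved >= c).mean() for c in cutoffs])
spec = np.array([(change_not_improved < c).mean() for c in cutoffs])
mcid_roc = cutoffs[np.argmax(sens + spec - 1)]
```

Note that the bias direction reported in the study (MCIDROC above MCIDR) depends on its item-response data-generating design; this simplified Gaussian sketch only shows the mechanics of computing the two quantities.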