Is a Multi-Slider Interface Layout Responsible for a Stimulus Spacing Bias in the MUSHRA Test?

Abstract

The multi-stimulus test with hidden reference and anchors (MUSHRA) is commonly used for the subjective quality assessment of audio systems. Despite its wide acceptance in scientific and industrial sectors, the method is not free from bias. One possible source of bias in the MUSHRA method may be attributed to the graphical design of its user interface. This paper examines the hypothesis that replacing the standard multi-slider layout with a single-slider version could reduce the stimulus spacing bias observed in the MUSHRA test. Contrary to expectation, this modification did not reduce the bias. This outcome formally supports the validity of using multiple sliders in the MUSHRA graphical interface.

Archives of Acoustics

The Journal of the Institute of Fundamental Technological Research of the Polish Academy of Sciences
