Authors: Nawka, Tadeus; Konerding, Uwe
Dates: 2019-09-19; 2014-07-28; 2014
URI: https://fis.uni-bamberg.de/handle/uniba/6345

Abstract:
OBJECTIVES/HYPOTHESES: To investigate the interrater reliability of stroboscopy evaluations assessed using Poburka's Stroboscopy Evaluation Rating Form (SERF).
STUDY DESIGN: Single-factor experiment with repeated measures on the same element.
METHODS: Evaluations by nine experts of 68 stroboscopy recordings on 16 SERF variables were analyzed. For the 14 SERF variables with interval scale level, interrater reliability was investigated using the intraclass correlations for absolute agreement (ICC-a) and consistency (ICC-c). ICCs-c were computed for both original values and for values standardized with respect to the raters' means and standard deviations (ipsative values). For the two nominally scaled SERF variables, "vertical level" and "glottal closure", interrater reliability was investigated using kappa coefficients.
RESULTS: For evaluations of single raters, ICCs-a ranged from 0.32 to 0.71, ICCs-c for original values from 0.41 to 0.72, and ICCs-c for ipsative values from 0.43 to 0.72. For mean evaluations of two raters, the corresponding values were 0.48 to 0.83 for ICCs-a, 0.58 to 0.84 for ICCs-c for original values, and 0.60 to 0.84 for ICCs-c for ipsative values. The interval scale variables with the lowest interrater reliabilities were phase closure, phase symmetry, and regularity. The kappa coefficients for vertical level and glottal closure were 0.15 and 0.38, respectively.
CONCLUSIONS: The interrater reliabilities for vertical level, glottal closure, phase closure, phase symmetry, and regularity are so low that these variables should not be assessed via stroboscopy. For the remaining variables, adequate reliability can be obtained by aggregating evaluations from at least two raters.

Language: eng
Keywords: Stroboscopy; Laryngeal examination; Voice diagnostics; Interrater reliability; Intraclass correlation
Title: The inter-rater reliability of stroboscopy evaluations
Type: article
URN: urn:nbn:de:bvb:473-irb-63458
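
As an illustration of the statistics named in the METHODS section, the following sketch computes single-rater intraclass correlations for absolute agreement (ICC-a) and consistency (ICC-c) from a recordings-by-raters matrix, together with the ipsative standardization of each rater's values. It is written in Python with NumPy; the function names and the simulated data are illustrative assumptions and are not the authors' code or data.

import numpy as np

def two_way_anova_ms(ratings):
    """Mean squares from a two-way layout (rows = recordings, columns = raters),
    one observation per cell, as needed for the ICC formulas (McGraw & Wong, 1996)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # recordings (subjects)
    msc = ss_cols / (k - 1)                 # raters
    mse = ss_err / ((n - 1) * (k - 1))      # residual
    return msr, msc, mse

def icc_consistency(ratings):
    """ICC(C,1): consistency, single rater."""
    n, k = ratings.shape
    msr, msc, mse = two_way_anova_ms(ratings)
    return (msr - mse) / (msr + (k - 1) * mse)

def icc_agreement(ratings):
    """ICC(A,1): absolute agreement, single rater."""
    n, k = ratings.shape
    msr, msc, mse = two_way_anova_ms(ratings)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def ipsatize(ratings):
    """Standardize each rater's column by that rater's own mean and standard deviation."""
    return (ratings - ratings.mean(axis=0)) / ratings.std(axis=0, ddof=1)

# Hypothetical example: 68 recordings rated by 9 raters on a 1-7 scale (random data).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(68, 9)).astype(float)
print("ICC-a (single rater):", icc_agreement(ratings))
print("ICC-c (original values):", icc_consistency(ratings))
print("ICC-c (ipsative values):", icc_consistency(ipsatize(ratings)))

Because ipsative standardization removes each rater's own mean and spread, ICC-c on ipsative values reflects only the ordering agreement among raters, which is why the abstract reports it separately from the ICCs on the original values.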