Where Should I Start? On Default Values for Slider Questions in Web Surveys

Publication Abstract

Liu, Mingnan, and Frederick G. Conrad. 2019. "Where Should I Start? On Default Values for Slider Questions in Web Surveys." Social Science Computer Review, 37(2): 248-269.

Web surveys have expanded the set of options available to questionnaire designers. One new option makes it possible to administer questions that respondents can answer by moving an on-screen slider to the position on a visual scale that best reflects their position on an underlying dimension. One attribute of sliders that is not well understood is how the position of the slider when the question is presented can affect responses, for better or worse. Yet the slider's default position is under the control of the designer and can potentially be exploited to maximize the quality of the responses (e.g., positioning the slider by default at the midpoint on the assumption that this is unbiased). There are several studies in the methodology literature that compare data collected via sliders and other methods, but relatively little attention has been given to the issue of default slider values. The current article reports findings from four web survey experiments (n = 3,744, 490, 697, and 902) that examine whether and how the default values of the slider influence responses. For 101-point questions (e.g., feeling thermometers), when the slider default values are set to 25, 50, 75, or 100, significantly more respondents choose those values as their answers, which seems unlikely to accurately reflect respondents' actual position on the underlying dimension. For 21- and 7-point scales, there is no significant or consistent impact of the default slider value on answers. Completion times are also similar across default values for questions with scales of this type. When sliders do not appear by default at any value, that is, when the respondent must click or touch the scale to activate the slider, the missing data rate is low for 21- and 7-point scales but relatively higher for 101-point scales. Respondents' evaluations of the survey's difficulty and their satisfaction with the survey do not differ by default value. The implications and limitations of the findings are discussed.

DOI: 10.1177/0894439318755336

Keywords:
Methodology

