Vass C, Davison N, Payne K. The role of training materials in healthcare stated preference studies: improved response efficiency? Presented at the 2018 Society for Medical Decision Making (SMDM) 17th Biennial European Meeting; June 12, 2018; Leiden, The Netherlands.
PURPOSE: To understand whether, and how, training materials affect stated preferences elicited using a discrete choice experiment (DCE).

METHOD(S): An online DCE was designed and piloted to elicit the preferences of members of the public (recruited via an internet panel provider) for a targeted approach to using biologics (an algorithm-based ‘biologic calculator’) compared with conventional ‘trial-and-error’ prescribing. The DCE comprised five attributes: delay to starting treatment; positive predictive value; negative predictive value; risk of infection; and cost saving to the National Health Service. Respondents were randomised to receive information about rheumatoid arthritis (RA), treatments, conventional prescribing and the biologic calculator as either (survey-A) text or (survey-B) an animated storyline. The unlabelled DCE was blocked into four surveys. Each survey contained six choice-sets with three alternatives for prescribing: two biologic calculators and a conventional approach (opt-out). The design, generated using Ngene, incorporated a test for monotonicity. Background questions included socio-demographics and self-reported measures of difficulty and attribute non-attendance (ANA). DCE data were analysed using standard and heteroskedastic conditional logit models (HCLM), allowing the scale parameter to be a function of the individual’s characteristics, including the type of training materials received.
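The scale parameterisation described above can be sketched in generic notation (a standard heteroskedastic conditional logit formulation; the symbols below are illustrative and not taken from the study):

```latex
U_{nj} = \mu_n \,( x_{nj}'\beta ) + \varepsilon_{nj},
\qquad
\mu_n = \exp(z_n'\gamma),
\qquad
P_{nj} = \frac{\exp(\mu_n \, x_{nj}'\beta)}{\sum_{k} \exp(\mu_n \, x_{nk}'\beta)}
```

Here \(x_{nj}\) are the attribute levels of alternative \(j\) for respondent \(n\), \(\varepsilon_{nj}\) is i.i.d. type-I extreme value, and \(z_n\) contains respondent characteristics (including, in this study, an indicator for the training-material version received), so \(\gamma\) captures differences in choice consistency (scale) while \(\beta\) captures preferences.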

RESULT(S): Three hundred members of the public completed the DCE, receiving either survey-A (n=158) or survey-B (n=142). The results of the conditional logit models showed that all attributes were statistically significant and in line with a priori expectations. Respondents preferred the new targeted approach to prescribing, with a negative and significant alternative-specific constant for the opt-out. The results also showed that those who received the storyline (survey-B) had larger estimated coefficients, suggesting they were more sensitive to the attributes. However, the HCLM showed a statistically significant (p<0.005) scale term, indicating that those who received the text (survey-A) version were more random in their choices. Further statistical tests, after accounting for differences in scale, suggested no differences in preferences between respondents completing survey-A and survey-B. Respondents who completed the text (survey-A) version had lower rates of self-reported ANA for all attributes apart from cost. There was no difference in failure of the monotonicity test between the two survey versions.

CONCLUSION(S): Using engaging (animated) training materials improved observed choice consistency but did not appear to bias preferences. Improved consistency may allow researchers to use a smaller sample size or more choice sets, improving the efficiency of their studies.
