Key Points for Decision Makers

Stated preference research can significantly benefit from incorporating technology. Interactive interfaces, multimedia elements and gamification techniques can enhance respondent engagement, leading to more robust data and, potentially, improved response rates.

Researchers need to be aware of the investment required (e.g. time and budget), challenges in ensuring data quality (e.g. avoiding fraudulent responses) and other potential pitfalls (e.g. exclusion of specific groups).

This article explores opportunities for stated preference researchers to use technology to (1) improve the design of their surveys, (2) more efficiently and effectively administer instruments and collect data, (3) better understand the underlying behaviours of respondents and (4) advance the application of data to preference-sensitive decisions in the real world.

1 Introduction

The interest in quantifying preferences of patients, healthcare professionals and other stakeholders for health and healthcare continues to grow [1]. Although various methods exist, discrete choice experiments (DCEs) remain the most popular [2]. The first DCEs in health were published in the early 1990s [3], and since then technology has brought new opportunities to support preference researchers in their studies. An obvious technological development that has benefited stated preference researchers is the additional computing power that has made estimation of more sophisticated experimental designs and choice models easier and faster. However, technological developments could benefit many other aspects of stated preference research.

Although quality is a multi-faceted concept, minimising bias is arguably a crucial part of conducting a good-quality preference study. Stated preference studies are susceptible to multiple biases including: hypothetical bias; information bias; cognitive biases, such as anchoring, framing and other effects; and sampling bias from non-response, among others [4, 5]. In addition to minimising bias, good-quality scientific studies typically aim to be valid (i.e. not erroneously inferring preferences from other behaviours such as simplifying heuristics), reliable (i.e. reproducible by other study teams) [6] and generalisable, so the results can be transferred and applied in real-world preference-sensitive scenarios [7].

Controlling for bias is already inherent in some stages of stated preference study design: studies typically employ experimental designs with statistical properties that aim to elicit causal preferences (i.e. the effect of an attribute change) and are increasingly analysed with preference models that can accommodate observed and unobserved heterogeneity and reduce bias in the estimated parameters [1, 8]. Computing technology now means complex experimental designs and econometric models can be estimated more efficiently. Previously, researchers needed to program the procedure into computer software themselves, but there is now freely accessible code, and many designs and models are available in commercial software packages such as Stata and specialist packages such as Ngene [9,10,11,12]. The ease of estimation has allowed researchers to explore more powerful and flexible models, and these are increasingly seen in the published health literature [13]. In this article, we do not describe further how computational advances have impacted discrete choice analysis, except briefly in the examination of artificial intelligence (AI).

Instead of focusing on improvements in computing power, we focus on reviewing technologies that can be used to improve the quality of a study in terms of its design, administration and accessibility. This review provides an overview of the key technological developments for stated preference studies that may help with survey development, survey administration, applying economic theory and generalisability to the real world. Therefore, the article is structured as follows: (1) using technology to improve survey design and quality; (2) technology for administering survey instruments and collecting choice data; (3) technology to understand behaviour; and (4) technology to aid the interpretation and application of choice and preferences in the real world.

In these respective stages, we highlight opportunities for technology to minimise bias (e.g. to help inform respondents and reduce hypothetical bias; to prepare survey materials with complex medical information in a way participants can understand and digest, reducing information bias; to produce choice sets that reduce framing effects; and to improve the presentation and compatibility of surveys, reducing sampling bias). We present technologies that can be used to understand if the results are valid and reliable by exploring whether the underlying economic theories hold, and we briefly explore how technology can be used to help facilitate real-world decision making using interactive decision aids.

This review intends to introduce readers to different technologies and their potential use in preference research. We direct readers to sources where they can gain a deeper understanding and, potentially, apply the technology themselves. For each technology, we provide examples known to the authors of the technology being implemented, describe the benefits and summarise the potential limitations. As some technologies are emergent, the empirical evidence supporting their use in preference research is limited and drawing conclusions from a handful of studies can be challenging. We therefore conclude the review with a critical reflection on the benefits and limitations of implementing (often costly) technology alongside stated preference studies.

2 Using Technology in Survey Design

Stated preference surveys in health can require respondents to understand a large amount of unfamiliar and complex medical information. Many attribute-based preference methods rely on respondents understanding attributes (and levels) and the clinical context of the decision, and retaining the information to make choices in subsequent choice sets. Helping respondents make informed choices leads to the collection of more robust (reliable) data and could minimise information and hypothetical bias. It could also help reduce selection bias: if respondents lose interest or feel overwhelmed by the type and amount of information, they may drop out.

The trend of administering surveys online is not unique to preference research [1, 14], and is likely driven by the desire to increase accessibility, improve the convenience (and cost) of collecting and handling data, and create a comfortable environment to obtain honest responses [15]. Online administration also provides an opportunity to incorporate digital materials within survey training materials. Table 1 provides an overview of different technologies that may be useful tools for communicating information in a stated preference study. In the following subsections, these technologies are discussed in more detail.

Table 1 Summary of technologies available for survey design

2.1 Videos and Animations

The rationale for including videos is typically based on the premise that they are more engaging and interesting than text and make the concepts easier to digest [16]. Preference researchers are increasingly adopting videos and animations to introduce disease and treatment context and a number of studies have examined the effects of video (vs text) on choice behaviour, response time and survey satisfaction [17,18,19].

For example, Lim et al. [18] compared respondents’ understanding of attribute information and preferences for ovarian cancer treatments between text-based and video-based information in a DCE preference study. Women aged 18–89 years and diagnosed with ovarian, fallopian tube or primary peritoneal cancer were randomised to receive text or a video comprising a slide set of still images with a voiceover. This study reported no statistical differences in preferences (or in the error variance) between the text and video arms. For comprehension questions, the mean number of correct responses was higher among those who watched the video, but the difference was not statistically significant. In addition, there was no difference in response time between the two information types. In another DCE study, Smith et al. [19] investigated the effect of video-based educational materials on preferences for glucose monitoring technology. Respondents with a self-reported diagnosis of diabetes (type 1, type 2 or other), aged ≥ 18 years, in the Netherlands and Poland were randomised to the text (an 11-minute read written at an eighth-grade reading level) or the video (a 9-to-10-minute animated storyline with voiceover). Among the 981 respondents, 261 in Poland and 233 in the Netherlands received the video. This study identified differences between samples when comparing the preference estimates and attribute relative importance scores, but these differences were not statistically significant within each country. Feedback on the ease of understanding and completion was more positive for those who received the video, but the difference was small and not statistically significant. Further, there was no statistical difference in response time.

These studies examining the effect of video-based information do not clearly demonstrate a positive or negative effect on the data collected in stated preference surveys. The evidence is inconclusive in part owing to the relatively small literature empirically comparing approaches. For preference researchers, we believe the effect of including a video (compared with no video) may depend on the design of the materials (their patient-centredness), study context, research questions, measures used to determine the impact, target population, sample size and methods used to evaluate the effect size.

2.2 Serious Games

Serious games are like “regular” games, but the objective is typically to train or educate, rather than for pure enjoyment. Like videos, serious games are thought to be more engaging and easier to follow than text [20]. Serious games arguably build on videos, with an interactive element that is more immersive for participants and provides opportunities for active learning with sustained learning benefits.

Vass et al. [21] used a DCE to investigate whether, and how, the presentation of information (plain text compared with a serious game) affected respondents’ choices and heuristics in a preference study for biologics in rheumatoid arthritis. Respondents from the general public were randomised to receive plain text or a serious game (an interactive animated storyline with an avatar) to learn about a prescribing algorithm to guide the selection of a first-line biologic in rheumatoid arthritis. The study found no effect of information type on preference weights; however, the scale term was statistically significant, indicating that respondents who received plain-text materials had more randomness in their choices. There were positive indicators for the serious game (confidence, ease of choice making), but these were not statistically significant. There was also no statistically significant difference in response time.

For both videos and serious games, there appear to be (at most) modest effects of these types of technologies in training materials. Further research is needed with larger sample sizes, particularly if the effect sizes are small, and with appropriate causal inference methods (such as matching or weighting) if randomisation of respondents to tools is not possible [22]. Evidence from other disciplines suggests video-based and game-based learning tools may be particularly beneficial to certain populations (e.g. children, those with cognitive impairments) [23] or for certain research questions (e.g. particularly new or complex concepts) [24], and logically these benefits may also apply in stated preference studies. We observe few “harms” or negative consequences of including video or game technologies, so researchers whose budget and timeline allow should consider incorporating them into their survey materials.

3 Using Technology in Data Collection

3.1 Flexible Formats and Virtual Reality

Although DCE surveys often make use of a table-choice task format where the columns reflect hypothetical alternatives and the rows represent attribute levels, the digitisation of surveys provides an opportunity to use visualisation techniques to enhance respondents’ decision making. The complexity of choice tasks from ‘difficult’ experimental designs has been shown to increase confusion and misunderstanding, reduce the level of choice consistency, increase the likelihood of respondent fatigue, encourage simplifying choice heuristics and increase the survey drop-out rate [25,26,27]. One way to reduce the task complexity and cognitive burden is the visual presentation of choice tasks.

In its simplest form, visually highlighting attribute levels may help respondents. In an online DCE experiment, Jonker et al. [28] found that colour coding significantly reduces task complexity, increases choice consistency and minimises learning effects. Technology can now be used to go beyond the tabulated design so that alternatives better reflect the way respondents might make choices in the real world. SurveyEngine presents two food-based case studies that investigate preferences for various attributes (brand, size, price) and create realistic mock-ups of packaging (and the purchasing environment) for the choice experiments (see example 1, example 2).

Using virtual reality technology to immerse participants in the decision-making scenario might be another way to add realism, minimise hypothetical bias and increase the external validity of stated preference studies. There is an emerging literature from outside the healthcare field suggesting that visualisation techniques can enhance respondents’ decision making, such as immersive 360° videos and photographs [29], three-dimensional images [30] and immersive virtual reality environments [31, 32]. A study (outside of health) investigated an immersive virtual reality environment delivered to respondents via a headset. A split-sample experiment of three presentation formats (text only; video; virtual reality) found that (1) respondent certainty can be increased by employing more immersive visualisation techniques such as virtual reality, and (2) the presentation format has a significant impact on preference estimates, and may affect respondents’ willingness to pay and the rank ordering of alternatives [33]. Another preference study, using a DCE to understand pedestrian behaviour, found virtual reality added realism and seemed to improve respondents’ cognitive understanding of complex elements of the task when compared with text-only experiments [34].

There is limited published evidence of how (or if) virtual reality technologies are being used by preference researchers in healthcare [35]. However, virtual reality can be useful for representing affective and dynamic dimensions related to the flow of objects and people, and for delivering experiences involving immersive and engaging interactivity: for example, by representing medical facilities (i.e. applying omnidirectional stimuli to make one feel that one is there), healthcare professionals (i.e. applying multimodal sensory stimuli to make one feel that one is interacting with someone) and interventions (i.e. applying rich sensory stimuli to make one feel that one is having a real experience).

3.2 Improving Reach and Optimising for Different Devices

There has been a significant shift from the past, when preference surveys were typically conducted via mail or in face-to-face interviews, to predominantly online administration [1, 3]. Online administration allows researchers to reach a larger number and wider range of people, potentially improving the representativeness of the data. This is especially important when studying populations that are harder to reach through traditional methods, such as marginalised groups or people living in remote areas. Optimising the survey for different devices, including tablets and smartphones, could further improve the reach of the survey.

However, emerging evidence suggests that the type of device used to access a DCE may not have a significant impact on the sample composition or the results of the study. Vass and Boeri [36] found that the device used to access a stated preference survey did not affect respondents’ preferences or observable choice behaviour. The study presented a DCE comprising three attributes and three alternatives, a question format that was easily viewed on smaller touchscreens. Investigations outside of health have likewise found that stated preferences are mostly robust to the device type [37], suggesting an opportunity to improve the sample reach by allowing respondents to complete the survey on their preferred device without fear of compromising the quality of data.

An important consideration is whether the use of technology prevents participation from specific groups and thus reduces the generalisability of findings to wider populations. For instance, the ‘digital divide’ recognises that for some a lack of access to technologies (such as smart mobile phone devices) may contribute to inequalities in healthcare [38]. Preference researchers may need to consider whether this could impact their study sample and if the use of technologies may result in the exclusion of preferences from groups with poor digital literacy or access to technologies. Discussion with patient and public involvement groups may help to determine whether this would be an issue [39]. Selection bias may be introduced as a result of the recruitment methods (e.g. if online panels are chosen for data collection). Generally, the evidence suggests that there is not a statistically significant difference in results elicited through face-to-face interviews versus online administration, especially when background characteristics are controlled for [40,41,42]. Typically, online survey companies do allow researchers to specify quotas for survey completion, which allows for diverse samples to be recruited [43]. In some areas, researchers may be able to learn from studies focused on trial recruitment to ensure better representation in surveys, for example when people with disabilities are included and require additional support to complete surveys [44].

3.3 Detecting Fraudulent Responses

Administering stated preference surveys online has many advantages; however, hosting survey instruments on the web increases the exposure to and risk of fraudulent responses from either humans or “bots” seeking to manipulate the study results or to collect the participation incentive [45]. In a DCE survey, Gonzalez et al. [46] used various tests to identify “bad actors” who could be affecting the quality of the data collected, and found evidence of bots in their dataset. Gonzalez et al. highlight that 40% of all Internet use is now attributed to bots, which may be a concern for preference researchers administering their surveys online, and they suggest incorporating traps such as CAPTCHAs or honeypots (to see if white-text questions are answered). Outside of preference research, there exist recommendations for preventing online survey fraud, including: careful distribution of the links; Internet Protocol address logging; analysis of timestamps to identify respondents in countries outside of the survey’s target geography; telephone screening; and validity questions [47].
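Checks of the kind described above can be combined into a simple post-collection screen. The following minimal sketch (the field names, thresholds and honeypot convention are illustrative assumptions, not details from Gonzalez et al. [46]) flags responses that answer a hidden honeypot question, complete implausibly fast, or straight-line through every choice task:

```python
# Hedged sketch of post-hoc screening for suspicious online survey responses.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Response:
    respondent_id: str
    seconds_to_complete: float
    honeypot_answer: str   # hidden (white-text) question; humans leave it blank
    choices: list          # chosen alternative per choice task

def flag_suspicious(resp, min_seconds=120.0):
    """Return a list of reasons a response looks fraudulent (empty = passes)."""
    reasons = []
    if resp.honeypot_answer.strip():            # bots tend to fill hidden fields
        reasons.append("honeypot answered")
    if resp.seconds_to_complete < min_seconds:  # implausibly fast completion
        reasons.append("too fast")
    if len(resp.choices) >= 8 and len(set(resp.choices)) == 1:
        reasons.append("straight-lining")       # same alternative in every task
    return reasons

ok = Response("r1", 540.0, "", [1, 2, 1, 1, 2, 1, 2, 2])
bot = Response("r2", 35.0, "blue", [1, 1, 1, 1, 1, 1, 1, 1])
print(flag_suspicious(ok))    # []
print(flag_suspicious(bot))   # ['honeypot answered', 'too fast', 'straight-lining']
```

In practice such flags would be combined with the other recommendations above (IP logging, timestamp geography checks, validity questions) rather than used to exclude respondents on a single criterion.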

4 Using Technology to Understand Choice Behaviour

Technology can be used to better understand not just what people choose, but how and why they make those choices. Illuminating choice-making behaviour in preference studies is important in understanding whether the underlying economic theories hold. For example, most DCEs are analysed under random utility theory and the associated assumptions about choice behaviour. However, respondents may violate these assumptions by applying simplifying heuristics [48]. Without technology, researchers can rely on qualitative accounts or simple questions to illuminate respondents’ decision making and reconcile the choices they observe with preferences that are being indirectly measured [49, 50]. However, qualitative accounts rely on respondents reliably and accurately relaying their decision-making processes when asked: think-aloud exercises add a layer of complexity as respondents must talk about their thought processes whilst completing the choice tasks, and debriefing questions after the survey rely on respondents’ recall [51]. This section explores technologies that may provide an arguably more objective opportunity to explore respondents’ decision-making processes and understand the extent to which observed behaviours align with theory [35].
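The random utility framework mentioned above can be made concrete with a short simulation. The sketch below (the attribute effects, dimensions and sample size are our own illustrative assumptions, not values from any cited study) generates synthetic choices from a logit model and recovers the preference weights by maximum likelihood:

```python
# Minimal sketch of the random utility model underlying most DCE analyses:
# utility = systematic part (attribute levels x preference weights) + Gumbel
# error, which yields logit choice probabilities. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_tasks, n_alt, n_attr = 4000, 2, 2          # choice tasks, alternatives, attributes
X = rng.standard_normal((n_tasks, n_alt, n_attr))
true_beta = np.array([1.0, -0.5])            # assumed "true" preference weights

utility = X @ true_beta + rng.gumbel(size=(n_tasks, n_alt))
choice = utility.argmax(axis=1)              # respondents pick the highest utility

def neg_log_lik(beta):
    v = X @ beta                             # systematic utilities
    p = np.exp(v - v.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)        # logit choice probabilities
    return -np.log(p[np.arange(n_tasks), choice]).sum()

fit = minimize(neg_log_lik, x0=np.zeros(n_attr), method="BFGS")
print(fit.x)  # estimates should lie close to true_beta
```

If respondents instead apply simplifying heuristics, the recovered weights will diverge from those governing their attention-driven behaviour, which is one motivation for process-tracing technologies such as eye-tracking.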

4.1 Eye-Tracking and Related Technologies

Examples of eye-tracking studies in preference research are increasing [52,53,54,55] and critical reviews of the technology for health preference research already exist [35]. Rigby et al. [35] reviewed related technologies including how stated preference studies have been conducted with mouse tracing and brain monitoring via electroencephalography and functional magnetic resonance imaging. Most applications have sought to better understand how respondents process information in the survey and develop and apply decision rules, including attribute non-attendance, and to determine if observed behaviours are heuristics or preferences [52,53,54,55,56]. Studies have also combined preference data and data from the technology to improve the empirical performance of the estimated choice models [53, 57].

Aside from mouse tracing, these technologies may be very costly and difficult to implement given that some require participants to travel to a location to complete the survey whilst being monitored. For these reasons, whilst providing useful information, these technologies are unlikely to become routinely used in preference research. Nevertheless, the data generated from such equipment may provide insights that can be generalised to the wider preference community; for example, how unimportant attributes are attended to or ignored. Further research is required, however, to understand whether studies conducted in laboratory settings are generalisable [58].

5 Using Technology for Analysis and Application

5.1 Artificial Intelligence and Machine Learning

The use of AI and AI-related approaches in the analysis of preference data is in continuous development [59]. The area has been growing since the exploration of artificial neural networks in the early 2000s [60]. Machine learning techniques, including neural networks and gradient boosting trees [61, 62], which can continuously update, have been examined as alternative data-driven approaches to traditional choice models but there is mixed evidence regarding their performance [59]. In theory, AI tools could be trained in the coming years to help the analyst better identify patterns and explore and interpret choice data in different ways.
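As an illustration of the data-driven approaches mentioned above, the sketch below (using scikit-learn; the synthetic data-generating process is our own assumption) fits a gradient boosting classifier to predict simulated pairwise choices from attribute differences:

```python
# Hedged sketch: a gradient boosting classifier as a data-driven alternative
# to a parametric choice model. The data are synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Pairwise choice tasks: features are attribute-level differences between
# two alternatives; the outcome is whether alternative A was chosen.
n_tasks, n_attr = 3000, 3
X = rng.standard_normal((n_tasks, n_attr))
beta = np.array([1.2, -0.8, 0.5])               # assumed preference weights
p_choose_A = 1 / (1 + np.exp(-(X @ beta)))      # logit choice probabilities
y = (rng.random(n_tasks) < p_choose_A).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

Unlike a parametric choice model, the boosted trees provide no directly interpretable preference weights, which is one reason the evidence on the usefulness of such methods for preference research remains mixed.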

5.2 Values Clarification in Decision Aids

Technological advances can and have helped the design and implementation of decision aids by allowing them to be more interactive and providing the opportunity for personalisation to the individual making the decision [63, 64]. A key step in personalising the decision aid is including a values clarification component, which can be (and increasingly is) preference based [65]. Incorporating preferences into decision aids is driven by the rationale that patients may make better decisions if their values are incorporated in the process, and has been shown to reduce decisional conflict, among other outcomes [66]. Furthermore, collecting individual-level preference data within the decision aid presents an opportunity to engage the physician by revealing the patient’s perspective to support their treatment recommendations, encouraging shared decision making. There are a number of examples of preference-based decision aids; for example, the DCIDA has been used to improve decision making about medications in individuals with suspected knee osteoarthritis by personalising the information to align with the patient’s preferences [67]. In addition to decision aids, individual-level preference data can be retrieved with methods such as the threshold technique [68] and the emerging multiple dimensional thresholding [69], as well as the Online Elicitation of Personal Utility Functions tool [70, 71]. These real-time results can be used to engage physicians to respond to the patient’s real-world and immediate wants and needs.

6 Conclusions

In this paper, we explored how technological advancements can enable, and have enabled, survey researchers to collect and analyse data more easily, particularly in stated preference research. Although the evidence for the technologies is evolving, the paper summarised possible technologies that we consider promising for improving the outcome of preference research, resulting in more robust estimates. The inclusion of videos and animations in survey instruments, the use of virtual reality, the use of AI in both design and analysis, and the inclusion of real-time results in decision aids offer exciting possibilities for DCE researchers.

While comparisons of traditional and new approaches appear to favour technology, this is an emerging area, which makes drawing substantive conclusions challenging. We recognise that some technologies have been, and will be, widely adopted, while the use of others is constrained by practical challenges such as the cost, size or portability of the equipment or the need for specialist skills. Innovation is continuously driving the technology frontier forward and we optimistically anticipate that costs will diminish, and new developments will help support preference researchers to conduct high-quality studies. To address these challenges and doubts, the inclusion of technological advances should be achieved gradually and carefully, by designing studies with split samples in which one group is assigned a survey with one advanced feature and one group is used as a control with a traditional survey instrument. No more than one advanced feature should be included for each subgroup, allowing the effect to be studied. Results should be examined, and researchers should decide whether the impact is reducing or increasing bias and whether it is beneficial in answering their research questions.

In the meantime, we encourage preference researchers to think critically about the need and implications of using new technology. Although some technologies appear to be worth the investment (of time and money), improvements in data quality may be achieved through other means (e.g. increased incentives, qualitative research). Additionally, attempting to address and suppress one bias may inadvertently introduce others. For example, in seeking to reduce information bias by providing easily digestible information about a health intervention, researchers risk “over-educating” the sample and reducing the external validity of the results. Likewise, in making surveys more accessible for one population, we may reduce the appeal for others. Finally, as health preference studies are most often administered to patient samples [1], any shift in how surveys are designed and administered should consider the accessibility and functionality of the technology in the respective disease area.