Abstract
The interest in quantifying stated preferences for health and healthcare continues to grow, as does the technology available to support and improve health preference studies. Technological advancements in the last two decades have implications and opportunities for preference researchers designing, administering, analysing, interpreting and applying the results of stated preference surveys. In this paper, we summarise selected technologies and how these can benefit a preference study. We discuss empirical evaluations of the technology in preference research, with examples from health where possible. The technologies reviewed include serious games, virtual reality, eye tracking, innovative formats and decision aids with values clarification components. We conclude with a critical reflection on the benefits and limitations of implementing (often costly) technology alongside stated preference studies.
Key Points
Stated preference research can significantly benefit from incorporating technology. Interactive interfaces, multimedia elements and gamification techniques can enhance respondent engagement, leading to more robust data and, potentially, improved response rates.
Researchers need to be aware of the investment required (e.g. time and budget), challenges in ensuring data quality (e.g. avoiding fraudulent responses) and other potential pitfalls (e.g. exclusion of specific groups).
This article explores opportunities for stated preference researchers to use technology to (1) improve the design of their surveys, (2) more efficiently and effectively administer instruments and collect data, (3) better understand the underlying behaviours of respondents and (4) advance the application of data to preference-sensitive decisions in the real world.
1 Introduction
The interest in quantifying preferences of patients, healthcare professionals and other stakeholders for health and healthcare continues to grow [1]. Although various methods exist, discrete choice experiments (DCEs) remain the most popular [2]. The first DCEs in health were published in the early 1990s [3], and since then technology has brought new opportunities to support preference researchers in their studies. An obvious technological development that has benefited stated preference researchers is the additional computing power that has made estimation of more sophisticated experimental designs and choice models easier and faster. However, technological developments could benefit many other aspects of stated preference research.
Although quality is a multi-faceted concept, minimising bias is arguably a crucial part of conducting a good-quality preference study. Stated preference studies are susceptible to multiple biases including: hypothetical bias; information bias; cognitive biases, such as anchoring, framing and other effects; and sampling bias from non-response, among others [4, 5]. In addition to minimising bias, good quality scientific studies typically aim to be valid (i.e. not erroneously inferring preferences from other behaviours such as simplifying heuristics), reliable (i.e. reproducible by other study teams) [6] and generalisable so the results can be transferred and applied in real world preference-sensitive scenarios [7].
Controlling for bias is already inherent in some stages of stated preference study design: studies typically employ experimental designs with statistical properties that aim to elicit causal preferences (i.e. the effect of an attribute change) and are increasingly analysed with preference models that can accommodate observed and unobserved heterogeneity and reduce bias in the estimated parameters [1, 8]. Computing technology now means complex experimental designs and econometric models can be estimated more efficiently. Previously, researchers needed to be able to programme the procedure into computer software but there is now freely accessible code, and many designs and models are available in commercial software packages such as Stata and specialist programmes such as Ngene [9,10,11,12]. The ease of estimation has allowed researchers to explore more powerful and flexible models, and these are increasingly seen in the published health literature [13]. In this article, we do not describe further how computational advances have impacted discrete choice analysis, except briefly in the examination of artificial intelligence (AI).
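To make concrete the kind of estimation that modern computing power has made routine, the sketch below simulates binary choice data under a simple random utility (logit) model and recovers the preference weights by maximising the log-likelihood with plain gradient ascent. The attributes, "true" weights and sample size are entirely hypothetical; in practice researchers would use established packages (e.g. in Stata or R) rather than hand-rolled code.

```python
import math
import random

random.seed(42)

# Simulate a simple two-alternative DCE: each task is summarised by the
# attribute-level differences (x1, x2) between alternatives A and B.
# The "true" preference weights below are an assumption for illustration.
TRUE_BETA = (1.0, -0.5)

def choice_prob(beta, x):
    # Logit probability of choosing alternative A over B.
    v = beta[0] * x[0] + beta[1] * x[1]
    return 1.0 / (1.0 + math.exp(-v))

tasks = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(2000)]
choices = [1 if random.random() < choice_prob(TRUE_BETA, x) else 0
           for x in tasks]

def fit(tasks, choices, lr=0.5, iters=400):
    # Maximum likelihood via gradient ascent on the logit log-likelihood.
    beta = [0.0, 0.0]
    n = len(tasks)
    for _ in range(iters):
        g = [0.0, 0.0]
        for x, y in zip(tasks, choices):
            err = y - choice_prob(beta, x)  # score contribution
            g[0] += err * x[0]
            g[1] += err * x[1]
        beta = [b + lr * gi / n for b, gi in zip(beta, g)]
    return beta

beta_hat = fit(tasks, choices)
print(beta_hat)  # should lie close to the simulated weights
```

The estimated weights converge towards the simulated values, illustrating why more flexible models (mixed logit, latent class) only became practical once this kind of optimisation could be run cheaply at scale.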
Instead of focusing on improvements in computing power, we focus on reviewing technologies that can be used to improve the quality of a study in terms of its design, administration and accessibility. This review provides an overview of the key technological developments for stated preference studies that may help with survey development, survey administration, applying economic theory and generalisability to the real world. Therefore, the article is structured as follows: (1) using technology to improve survey design and quality; (2) technology for administering survey instruments and collecting choice data; (3) technology to understand behaviour; and (4) technology to aid the interpretation and application of choice and preferences in the real world.
In these respective stages, we highlight opportunities for technology to minimise bias (e.g. to help inform respondents and reduce hypothetical bias; to prepare survey materials with complex medical information in a way participants can understand and digest to reduce information bias; to produce choice sets that reduce framing effects; and improve the presentation and compatibility of surveys to reduce sampling bias). We present technologies that can be used to understand if the results are valid and reliable by exploring whether the underlying economic theories hold, and we briefly explore how technology can be used to help facilitate real world decision making using interactive decision aids.
This review intends to introduce readers to different technologies and their potential use in preference research. We direct readers to sources where they can gain a deeper understanding and, potentially, apply the technology themselves. For each technology, we provide examples known to the authors of the technology being implemented, describe the benefits and summarise the potential limitations. As some technologies are emergent, the empirical evidence supporting their use in preference research is limited and drawing conclusions from a handful of studies can be challenging. We therefore conclude the review with a critical reflection on the benefits and limitations of implementing (often costly) technology alongside stated preference studies.
2 Using Technology in Survey Design
Stated preference surveys in health can require respondents to understand a large amount of unfamiliar and complex medical information. Many attribute-based preference methods rely on respondents understanding the attributes (and levels) and the clinical context of the decision, and retaining that information to make choices in subsequent choice sets. Helping respondents make informed choices leads to the collection of more robust (reliable) data and could minimise information and hypothetical bias. It could also help reduce selection bias, because respondents who lose interest or feel overwhelmed by the type and amount of information may drop out.
The trend of administering surveys online is not unique to preference research [1, 14], and is likely driven by the desire to increase accessibility, improve the convenience (and cost) of collecting and handling data, and create a comfortable environment to obtain honest responses [15]. Online administration also provides an opportunity for researchers to incorporate digital materials within their survey training materials. Table 1 provides an overview of different technologies that may be useful tools for communicating information in a stated preference study. In the following subsections, these technologies are discussed in more detail.
2.1 Videos and Animations
The rationale for including videos is typically based on the premise that they are more engaging and interesting than text and make the concepts easier to digest [16]. Preference researchers are increasingly adopting videos and animations to introduce disease and treatment context and a number of studies have examined the effects of video (vs text) on choice behaviour, response time and survey satisfaction [17,18,19].
For example, Lim et al. [18] compared respondents’ understanding of attribute information and preferences for ovarian cancer treatments when the information was presented as text or as video in a DCE preference study. Women aged 18–89 years and diagnosed with ovarian, fallopian tube or primary peritoneal cancer were randomised to receive text or a video comprising a slide set of still images with a voiceover. This study reported no statistical differences in preferences (or in the error variance) between the text and video arms. For comprehension questions, the mean number of correct responses was higher among those who watched the video, but the difference was not statistically significant. In addition, there was no difference in response time between the two information types. In another DCE study, Smith et al. [19] investigated the effect of video-based educational materials on preferences for glucose monitoring technology. Respondents in the Netherlands and Poland with a self-reported diagnosis of diabetes (type 1, type 2 or other), aged ≥ 18 years, were randomised to text (an 11-minute read written at an eighth-grade reading level) or video (a 9-to-10-minute animated storyline with voiceover). Among the 981 respondents, 261 in Poland and 233 in the Netherlands received the video. This study identified differences between samples when comparing the preference estimates and attribute relative importance scores, but these differences were not statistically significant within each country. Feedback on the ease of understanding and completion was more positive for those who received the video, but the difference was small and not statistically significant. Further, there was no statistical difference in response time.
These studies examining the effect of video-based information do not clearly demonstrate a positive or negative effect on the data collected in stated preference surveys. It is inconclusive in part owing to the relatively small literature empirically comparing approaches. For preference researchers, we believe the effect of including a video (compared with no video) may be affected by the design of the materials (their patient centeredness), study context, research questions, measures to determine the impact, target population, sample size and methods used to evaluate the effect size.
2.2 Serious Games
Serious games are like “regular” games, but the objective is typically to train or educate, rather than for pure enjoyment. Like videos, serious games are thought to be more engaging and easier to follow than text [20]. Serious games arguably build on videos, with an interactive element that is more immersive for participants and provides opportunities for active learning with sustained learning benefits.
Vass et al. [21] used a DCE to investigate whether, and how, the presentation of information (plain text compared with a serious game) affected respondents’ choices and heuristics in a preference study of biologics for rheumatoid arthritis. Respondents from the general public were randomised to receive plain text or a serious game (an interactive animated storyline with an avatar) to learn about a prescribing algorithm to guide the selection of a first-line biologic in rheumatoid arthritis. The study found no effect of information type on preference weights; however, the scale term was statistically significant, indicating that respondents who received the plain-text materials made choices with more randomness. There were positive indicators for the serious game (confidence, ease of choice making) but these were not statistically significant. There was also no statistically significant difference in response time.
For both videos and serious games, the evidence points to (at most) very modest effects of these technologies in training materials. Further research is needed in larger samples, particularly if the effect sizes are small, with appropriate causal inference methods (such as matching or weighting) if randomisation of respondents to tools is not possible [22]. Evidence from other disciplines suggests video-based and game-based learning tools may be particularly beneficial to certain populations (e.g. children, those with cognitive impairments) [23] or for certain research questions (e.g. particularly new or complex concepts) [24], and logically these benefits may also apply in stated preference studies. We observe few “harms” or negative consequences of including video or game technologies, so researchers whose budget and timeline allow should consider incorporating these into their survey materials.
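Where randomisation to text versus video or game arms is not possible, a weighting adjustment of the kind cited above can be sketched in a few lines. The following toy example uses inverse-probability weighting; the confounder (age group), outcome values and selection rates are all invented for illustration.

```python
# Illustrative inverse-probability weighting (IPW): respondents
# self-select into "video" vs "text" materials and age group confounds
# the comparison. All numbers are hypothetical.

# Tuples of (older, received_video, outcome), e.g. a comprehension score.
data = ([(1, 1, 5.0)] * 8 + [(1, 0, 4.0)] * 2 +
        [(0, 1, 7.0)] * 2 + [(0, 0, 6.0)] * 8)

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison ignores that older respondents mostly chose video.
naive = (mean([y for _, t, y in data if t == 1]) -
         mean([y for _, t, y in data if t == 0]))

def propensity(older):
    # Empirical probability of receiving video within an age stratum.
    return mean([t for o, t, _ in data if o == older])

# Weight each respondent by the inverse probability of the arm received.
num = den = num0 = den0 = 0.0
for older, treated, y in data:
    p = propensity(older)
    if treated:
        w = 1.0 / p
        num += w * y
        den += w
    else:
        w = 1.0 / (1.0 - p)
        num0 += w * y
        den0 += w
ipw = num / den - num0 / den0

print(naive, ipw)  # the naive estimate is confounded; IPW removes the bias
```

In this constructed example, the naive comparison is negative even though the (simulated) benefit of the video is positive; the weighted comparison recovers it. Real applications would estimate propensities with a richer model, as in the matching and weighting approaches referenced above [22].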
3 Using Technology in Data Collection
3.1 Flexible Formats and Virtual Reality
Although DCE surveys often make use of a table-choice task format where the columns reflect hypothetical alternatives and the rows represent attribute levels, the digitisation of surveys provides an opportunity to use visualisation techniques to enhance respondents’ decision making. The complexity of choice tasks from ‘difficult’ experimental designs has been shown to increase confusion and misunderstanding, reduce the level of choice consistency, increase the likelihood of respondent fatigue, encourage simplifying choice heuristics and increase the survey drop-out rate [25,26,27]. One way to reduce the task complexity and cognitive burden is the visual presentation of choice tasks.
In its simplest form, visually highlighting attribute levels may help respondents. In an online DCE experiment, Jonker et al. [28] found colour coding significantly reduces task complexity, increases choice consistency and minimises learning effects. Technology can now be used to go beyond the tabulated design to make alternatives better reflect the way respondents might make choices in the real world. SurveyEngine presents two food-based case studies that investigate preferences for various attributes (brand, size, price) using realistic mock-ups of packaging (and the purchasing visuals) in the choice experiments (see example 1, example 2).
Using virtual reality technology to immerse participants in the decision-making scenario might be another way to add realism and minimise hypothetical bias and increase the external validity of stated preference studies. There is an emerging literature from outside the healthcare field that suggests visualisation techniques can enhance respondents’ decision making, such as 360 videos with immersion and photographs [29], three-dimensional images [30] and immersive virtual reality environments [31, 32]. A study (outside of health) investigated an immersive virtual reality environment delivered to respondents via a headset. A split-sample experiment of three different presentation formats (text only; video; virtual reality) found that (1) respondent certainty can be increased by employing more immersive visualisation techniques such as virtual reality, and that (2) the presentation format has a significant impact on preference estimates, and may affect respondents’ willingness to pay and the rank ordering of alternatives [33]. Another preference study using a DCE to understand pedestrian behaviour found virtual reality added realism and seemed to improve a respondent’s cognitive understanding of complex elements of the task, when compared with text-only experiments [34].
There is limited published evidence of how (or if) virtual reality technologies are being used by preference researchers in healthcare [35]. However, virtual reality can be useful for representing affective and dynamic dimensions related to the flow of objects and people, and for delivering immersive and engaging interactive experiences: for example, representing medical facilities (i.e. applying omnidirectional stimuli to make one feel that they are there), healthcare professionals (i.e. applying multimodal sensory stimuli to make one feel that they are interacting with someone) and interventions (i.e. applying rich sensory stimuli to make one feel that they are having a real experience).
3.2 Improving Reach and Optimising for Different Devices
There has been a significant shift from the past, when preference surveys were typically conducted via mail or in face-to-face interviews, to predominantly online administration [1, 3]. Online administration allows researchers to reach a larger number of and wider range of people, potentially improving the representativeness of the data. This is especially important when studying populations that are harder to reach through traditional methods, such as marginalised groups or people living in remote areas. Optimising the survey for different devices including tablets and smart mobile phones could further improve the reach of the survey.
However, emerging evidence suggests that the type of device used to access a DCE may not have a significant impact on the sample composition or the results of the study. Vass and Boeri [36] found that the device used to access a stated preference survey did not affect respondents’ preferences or observable choice behaviour. The study presented a DCE comprising three attributes and three alternatives, a question format that was easily viewed on smaller touchscreens. Likewise, investigations outside of health have found stated preferences are mostly robust to the device type [37], suggesting an opportunity to improve the sample reach by allowing respondents to complete the survey on their preferred device without fear of compromising the quality of data.
An important consideration is whether the use of technology prevents participation from specific groups and thus reduces the generalisability of findings to wider populations. For instance, the ‘digital divide’ recognises that for some a lack of access to technologies (such as smart mobile phone devices) may contribute to inequalities in healthcare [38]. Preference researchers may need to consider whether this could impact their study sample and if the use of technologies may result in the exclusion of preferences from groups with poor digital literacy or access to technologies. Discussion with patient and public involvement groups may help to determine whether this would be an issue [39]. Selection bias may be introduced as a result of the recruitment methods (e.g. if online panels are chosen for data collection). Generally, the evidence suggests that there is not a statistically significant difference in results elicited through face-to-face interviews versus online administration, especially when background characteristics are controlled for [40,41,42]. Typically, online survey companies do allow researchers to specify quotas for survey completion, which allows for diverse samples to be recruited [43]. In some areas, researchers may be able to learn from studies focused on trial recruitment to ensure better representation in surveys, for example when people with disabilities are included and require additional support to complete surveys [44].
3.3 Detecting Fraudulent Responses
Administering stated preference surveys online has many advantages; however, hosting survey instruments on the web increases the exposure to and risk of fraudulent responses from either humans or “bots” seeking to manipulate the study results or to collect the participation incentive [45]. In a DCE survey, Gonzalez et al. [46] used various tests to identify “bad actors” who could be affecting the quality of data collected, and found evidence of non-human (bot) responses in their dataset. Gonzalez et al. highlight that 40% of all Internet use is now attributed to bots, which may be a concern for preference researchers administering their surveys online, and they suggest incorporating traps such as CAPTCHAs or honeypots (to see if hidden white-text questions are answered). Outside of preference research, there exist recommendations for preventing online survey fraud including: careful distribution of the links, Internet Protocol address logging, analysis of timestamps to identify respondents in countries outside of the survey’s target geography, telephone screening and validity questions [47].
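A minimal screening pass along these lines might combine response-time, honeypot and geography checks. The field names, thresholds and target countries below are assumptions for illustration, not validated cut-offs; real screening rules should be pre-specified and calibrated for the study at hand.

```python
# Hypothetical post-hoc screening of online survey responses.
def flag_suspicious(resp, min_seconds=120, target_countries={"GB", "NL"}):
    flags = []
    if resp["duration_seconds"] < min_seconds:
        flags.append("too_fast")            # completed implausibly quickly
    if resp["honeypot_answer"]:             # hidden white-text question answered
        flags.append("honeypot")
    if resp["country"] not in target_countries:
        flags.append("geo_mismatch")        # outside the survey's target geography
    return flags

sample = [
    {"id": 1, "duration_seconds": 45,  "honeypot_answer": "",  "country": "GB"},
    {"id": 2, "duration_seconds": 600, "honeypot_answer": "x", "country": "GB"},
    {"id": 3, "duration_seconds": 540, "honeypot_answer": "",  "country": "US"},
    {"id": 4, "duration_seconds": 480, "honeypot_answer": "",  "country": "NL"},
]
flagged = {r["id"]: flag_suspicious(r) for r in sample}
print(flagged)  # only respondent 4 passes all checks
```

Flagged responses would typically be reviewed rather than automatically discarded, since legitimate respondents can trip individual rules (e.g. completing quickly while travelling abroad).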
4 Using Technology to Understand Choice Behaviour
Technology can be used to better understand not just what people choose, but how and why they make those choices. Illuminating choice-making behaviour in preference studies is important in understanding whether the underlying economic theories hold. For example, most DCEs are analysed under random utility theory and the associated assumptions about choice behaviour. However, respondents may violate these assumptions by applying simplifying heuristics [48]. Without technology, researchers can rely on qualitative accounts or simple questions to illuminate respondents’ decision making and reconcile the choices they observe with preferences that are being indirectly measured [49, 50]. However, qualitative accounts rely on respondents reliably and accurately relaying their decision-making processes when asked: think-aloud exercises add a layer of complexity as respondents must talk about their thought processes whilst completing the choice tasks, and debriefing questions after the survey rely on respondents’ recall [51]. This section explores technologies that may provide an arguably more objective opportunity to explore respondents’ decision-making processes and understand the extent to which observed behaviours align with theory [35].
4.1 Eye-Tracking and Related Technologies
Examples of eye-tracking studies in preference research are increasing [52,53,54,55] and critical reviews of the technology for health preference research already exist [35]. Rigby et al. [35] reviewed related technologies including how stated preference studies have been conducted with mouse tracing and brain monitoring via electroencephalography and functional magnetic resonance imaging. Most applications have sought to better understand how respondents process information in the survey and develop and apply decision rules, including attribute non-attendance, and to determine if observed behaviours are heuristics or preferences [52,53,54,55,56]. Studies have also combined preference data and data from the technology to improve the empirical performance of the estimated choice models [53, 57].
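As a toy example of the kind of inference these data support, possible attribute non-attendance might be flagged when an attribute's average fixation time across choice tasks falls below some threshold. The durations and cut-off below are invented for illustration; real studies calibrate such rules empirically and combine them with modelled non-attendance.

```python
# Hypothetical fixation durations (milliseconds on each attribute's row,
# one entry per choice task) from an eye-tracking session.
fixations = {
    "efficacy":     [820, 760, 900, 810],
    "side_effects": [640, 700, 590, 660],
    "cost":         [40, 0, 25, 10],    # barely fixated across tasks
}

THRESHOLD_MS = 100  # assumed cut-off, not an empirically validated value

non_attended = [attr for attr, durations in fixations.items()
                if sum(durations) / len(durations) < THRESHOLD_MS]
print(non_attended)
```

Here the cost attribute would be flagged as potentially non-attended, which could then be cross-checked against the respondent's estimated preference weights to distinguish a simplifying heuristic from a genuinely unimportant attribute.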
Aside from mouse tracing, these technologies may be very costly and difficult to implement given some require participants to travel to a location to conduct the survey whilst being monitored. For these reasons, whilst providing useful information, these technologies are unlikely to become routinely used in preference research. However, the data generated from using this equipment may provide insights that can be generalised to the wider preference community; for example, how unimportant attributes are attended to or ignored. However, further research is required to understand whether studies conducted in laboratory settings are generalisable [58].
5 Using Technology for Analysis and Application
5.1 Artificial Intelligence and Machine Learning
The use of AI and AI-related approaches in the analysis of preference data is in continuous development [59]. The area has been growing since the exploration of artificial neural networks in the early 2000s [60]. Machine learning techniques, including neural networks and gradient boosting trees [61, 62], which can continuously update, have been examined as alternative data-driven approaches to traditional choice models but there is mixed evidence regarding their performance [59]. In theory, AI tools could be trained in the coming years to help the analyst better identify patterns and explore and interpret choice data in different ways.
5.2 Values Clarification in Decision Aids
Technological advances can and have helped the design and implementation of decision aids by allowing them to be more interactive and providing the opportunity for personalisation to the individual making the decision [63, 64]. A key step in personalising the decision aid is including a values clarification component, which can be (and increasingly is) preference based [65]. Incorporating preferences into decision aids is driven by the rationale that patients may make better decisions if their values are incorporated in the process, and this has been shown to reduce decisional conflict, among other outcomes [66]. Furthermore, collecting individual-level preference data within the decision aid presents an opportunity to engage the physician by revealing the patient’s perspective to support their treatment recommendations, encouraging shared decision making. There are a number of examples of preference-based decision aids; for example, the DCIDA has been used to improve decision making about medications in individuals with suspected knee osteoarthritis, by personalising the information to align with the patient’s preferences [67]. In addition to decision aids, individual-level preference data can be retrieved with methods such as the threshold technique [68] and the emerging multiple dimensional thresholding [69], as well as the Online Elicitation of Personal Utility Functions tool [70, 71]. These real-time results can be used to engage physicians to respond to the patient’s real world and immediate wants and needs.
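At its simplest, the values clarification step scores each option against the individual's elicited attribute weights and ranks the options accordingly. The attributes, weights and options below are hypothetical, and real tools elicit and combine weights with more sophisticated methods; the sketch only shows the basic linear-scoring idea.

```python
# Hypothetical elicited weights for one patient (signs reflect whether
# more of the attribute is better or worse for them).
weights = {"efficacy": 0.9, "side_effects": -0.6, "oral_admin": 0.3}

# Hypothetical treatment profiles on the same (rescaled) attributes.
options = {
    "Drug A": {"efficacy": 0.7, "side_effects": 0.2, "oral_admin": 1},
    "Drug B": {"efficacy": 0.5, "side_effects": 0.1, "oral_admin": 0},
    "Drug C": {"efficacy": 0.9, "side_effects": 0.6, "oral_admin": 1},
}

def score(attrs):
    # Linear value score: weighted sum of attribute levels.
    return sum(weights[k] * v for k, v in attrs.items())

ranked = sorted(options, key=lambda name: -score(options[name]))
print(ranked)  # options ordered by this patient's values
```

A decision aid built on this idea would display the personalised ranking (and the trade-offs driving it) to the patient and clinician in real time, supporting the shared decision making described above.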
6 Conclusions
In this paper, we explored how technological advancements can and have enabled survey researchers to collect and analyse data more easily, particularly in stated preference research. Although the evidence for the technologies is evolving, we have summarised technologies that we consider promising for improving the outcomes of preference research, resulting in more robust estimates. The inclusion of videos and animations in survey instruments, the use of virtual reality, the use of AI in both design and analysis, and the inclusion of real-time results in decision aids offer exciting possibilities for DCE researchers.
While comparisons of traditional and new approaches appear to favour technology, this is an emerging area, which makes drawing substantive conclusions challenging. We recognise some technologies have been, and will be, widely adopted, while the use of others is constrained by practical challenges such as the cost, size or portability of the equipment or the need for specialist skills. Innovation is continuously driving the technology frontier forward and we optimistically anticipate that costs will diminish, and new developments will help support preference researchers to conduct high-quality studies. To address these challenges and doubts, the inclusion of technological advances should be achieved gradually and carefully, by designing studies with split samples in which one group is assigned a survey with one advanced feature and one group is used as a control with a traditional survey instrument. No more than one advanced feature should be included for each subgroup, allowing its effect to be studied. Results should be examined, and researchers should decide whether the impact is reducing or increasing bias and whether it is beneficial in answering their research questions.
In the meantime, we encourage preference researchers to think critically about the need and implications of using new technology. Although some technologies appear to be worth the investment (of time and money), improvements in data quality may be achieved through other means (e.g. increased incentives, qualitative research). Additionally, attempting to address and suppress one bias may inadvertently introduce others. For example, in seeking to reduce information bias by providing easily digestible information about a health intervention, researchers risk “over-educating” the sample and reducing the external validity of the results. Likewise, in making surveys more accessible for one population, we may reduce the appeal for others. Finally, as health preference studies are most often administered to patient samples [1], any shift in how surveys are designed and administered should consider the accessibility and functionality of the technology in the respective disease area.
References
Soekhai V, de Bekker-Grob EW, Ellis AR, Vass CM. Discrete choice experiments in health economics: past, present and future. Pharmacoeconomics. 2019;37:201–26.
Mahieu P-A, Andersson H, Beaumais O, Crastes dit Sourd R, Hess S, Wolff F-C. Stated preferences: a unique database composed of 1657 recent published articles in journals related to agriculture, environment, or health. Rev Agric Food Environ Stud. 2017;98:201–20.
Ryan M, Gerard K. Using discrete choice experiments to value health care programmes: current practice and future research reflections. Appl Health Econ Health Policy. 2003;2:55–64.
Loomis J. What’s to know about hypothetical bias in stated preference valuation studies? J Econ Surv. 2011;25:363–70.
Lindhjem H, Navrud S. Using internet in stated preference surveys: a review and comparison of survey modes. Int Rev Environ Resour Econ. 2011;5:309–51.
Drost E. Validity and reliability in social science research. Educ Res Perspect. 2011;38:105–23.
Sculpher MJ, Pang FS, Manca A, Drummond MF, Golder S, Urdahl H, et al. Generalisability in economic evaluation studies in healthcare: a review and case studies. Health Technol Assess (Rockv). 2004;8:iii–117.
Vass C, Boeri M, Karim S, Marshall D, Craig B, Ho KA, et al. Accounting for preference heterogeneity in discrete-choice experiments: an ISPOR Special Interest Group report. Value Health. 2022;25:685–94.
Train K. Discrete choice methods with simulation. 2nd ed. Cambridge: Cambridge University Press; 2009. https://doi.org/10.1017/CBO9780511805271
Hole AR. DCREATE: stata module to create efficient designs for discrete choice experiments. Stat Softw Compon. 2015;S458059:1–22.
Gu Y, Hole AR, Knox S. Fitting the generalized multinomial logit model in Stata. Stata J. 2013;13:382–97.
van Cranenburgh S, Collins AT. New software tools for creating stated choice experimental designs efficient for regret minimisation and utility maximisation decision rules. J Choice Model. 2019;31:104–23.
Karim S, Craig BM, Vass C, Groothuis-Oudshoorn CGM. Current practices for accounting for preference heterogeneity in health-related discrete choice experiments: a systematic review. Pharmacoeconomics. 2022;40:943–56.
Evans JR, Mathur A. The value of online surveys: a look back and a look ahead. Internet Res. 2018;28:854–87.
Lefever S, Dal M, Matthíasdóttir Á. Online data collection in academic research: advantages and limitations. Br J Educ Technol. 2007;38:574–82.
Forbes H, Oprescu FI, Downer T, Phillips NM, McTier L, Lord B, et al. Use of videos to support teaching and learning of clinical skills in nursing education: a review. Nurse Educ Today. 2016;42:53–6.
Charvin M, Launoy G, Berchi C. The effect of information on prostate cancer screening decision process: a discrete choice experiment. BMC Health Serv Res. 2020;20:467.
Lim SL, Yang JC, Ehrisman J, Havrilesky LJ, Reed SD. Are videos or text better for describing attributes in stated-preference surveys? Patient. 2020;13:401–8.
Smith IP, Whichello CL, de Bekker-Grob EW, van Mölken MPMHR, Veldwijk J, de Wit GA. The impact of video-based educational materials with voiceovers on preferences for glucose monitoring technology in patients with diabetes: a randomised study. Patient. 2023;16:223–37.
Clark R. Learning from serious games? Arguments, evidence, and research suggestions. J Gaming Virtual Worlds. 2007;47:56–9.
Vass CM, Davison NJ, Vander Stichele G, Payne K. A picture is worth a thousand words: the role of survey training materials in stated-preference studies. Patient. 2020;13:163–73.
Vass CM, Boeri M, Poulos C, Turner AJ. Matching and weighting in stated preferences for health care. J Choice Model. 2022;44:100367.
Flogie A, Aberšek B, Aberšek MK, Lanyi CS, Pesek I. Development and evaluation of intelligent serious games for children with learning difficulties: observational study. JMIR Serious Games. 2020;8:e13190.
Graafland M, Schraagen JM, Schijven MP. Systematic review of serious games for medical education and surgical skills training. Br J Surg. 2012;99:1322–30.
Flynn TN, Bilger M, Malhotra C, Finkelstein EA. Are efficient designs used in discrete choice experiments too difficult for some respondents? A case study eliciting preferences for end-of-life care. Pharmacoeconomics. 2016;34:273–84.
Louviere JJ, Islam T, Wasi N, Street D, Burgess L. Designing discrete choice experiments: do optimal designs come at a price? J Consum Res. 2008;35:360–75.
DeShazo JR, Fermo G. Designing choice sets for stated preference methods: the effects of complexity on choice consistency. J Environ Econ Manage. 2002;44:123–43.
Jonker MF, Donkers B, de Bekker-Grob E, Stolk EA. Attribute level overlap (and color coding) can reduce task complexity, improve choice consistency, and decrease the dropout rate in discrete choice experiments. Health Econ. 2019;28:350–63.
Rossetti T, Hurtubia R. An assessment of the ecological validity of immersive videos in stated preference surveys. J Choice Model. 2020;34:100198.
Zhao Y, van den Berg PEW, Ossokina IV, Arentze TA. Comparing self-navigation and video mode in a choice experiment to measure public space preferences. Comput Environ Urban Syst. 2022;95:101828.
Farooq B, Cherchi E, Sobhani A. Virtual immersive reality for stated preference travel behavior experiments: a case study of autonomous vehicles on urban roads. Transp Res Rec. 2018;2672:35–45.
Nuñez Velasco JP, Farah H, van Arem B, Hagenzieker MP. Studying pedestrians’ crossing behavior when interacting with automated vehicles using virtual reality. Transp Res Part F Traffic Psychol Behav. 2019;66:1–14.
Mokas I, Lizin S, Brijs T, Witters N, Malina R. Can immersive virtual reality increase respondents’ certainty in discrete choice experiments? A comparison with traditional presentation formats. J Environ Econ Manage. 2021;109:102509.
Arellana J, Garzón L, Estrada J, Cantillo V. On the use of virtual immersive reality for discrete choice experiments to modelling pedestrian behaviour. J Choice Model. 2020;37:100251.
Rigby D, Vass CM, Payne K. Opening the ‘black box’: an overview of methods to investigate the decision-making process in choice-based surveys. Patient. 2020;13:31–41.
Vass CM, Boeri M. Mobilising the next generation of stated-preference studies: the association of access device with choice behaviour and data quality. Patient. 2021;14:55–63.
Liebe U, Glenk K, Oehlmann M, Meyerhoff J. Does the use of mobile devices (tablets and smartphones) affect survey quality and choice behaviour in web surveys? J Choice Model. 2015;14:17–31.
Arias López MDP, Ong BA, Borrat Frigola X, Fernández AL, Hicklent RS, Obeles AJT, et al. Digital literacy as a new determinant of health: a scoping review. PLOS Digit Health. 2023;2:e0000279.
Shields GE, Brown L, Wells A, Capobianco L, Vass C. Utilising patient and public involvement in stated preference research in health: learning from the existing literature and a case study. Patient. 2021;14:399–412.
Marta-Pedroso C, Freitas H, Domingos T. Testing for the survey mode effect on contingent valuation data quality: a case study of web based versus in-person interviews. Ecol Econ. 2007;62:388–98.
Nielsen JS. Use of the internet for willingness-to-pay surveys: a comparison of face-to-face and web-based interviews. Resour Energy Econ. 2011;33:119–29.
Grewenig E, Lergetporer P, Simon L, Werner K, Woessmann L. Can internet surveys represent the entire population? A practitioners’ analysis. Eur J Polit Econ. 2023;78:102382.
Miller CA, Guidry JPD, Dahman B, Thomson MD. A tale of two diverse qualtrics samples: information for online survey researchers. Cancer Epidemiol Biomarkers Prev. 2020;29:731–5.
Shariq S, Cardoso Pinto AM, Budhathoki SS, Miller M, Cro S. Barriers and facilitators to the recruitment of disabled people to clinical trials: a scoping review. Trials. 2023;24:171.
Marshall D, Mansfield C, von Butler L, MacDonald K. Preventing, detecting, and analyzing data from suspected fraudulent respondents in online surveys, with examples from health preference studies. ISPOR. 2023. https://www.ispor.org/conferences-education/event/2023/02/14/default-calendar/how-to-handle-fraudulent-responses-in-health-preference-studies. Accessed 11 Apr 2024.
Gonzalez JM, Grover K, Leblanc TW, Reeve BB. Did a bot eat your homework? An assessment of the potential impact of bad actors in online administration of preference surveys. PLoS ONE. 2023;18:e0287766.
Wang J, Calderon G, Hager ER, Edwards LV, Berry AA, Liu Y, et al. Identifying and preventing fraudulent responses in online public health surveys: lessons learned during the COVID-19 pandemic. PLOS Glob Public Health. 2023;3:e0001452.
Veldwijk J, Marceta SM, Swait JD, Lipman SA, de Bekker-Grob EW. Taking the shortcut: simplifying heuristics in discrete choice experiments. Patient. 2023;16:301–15.
Ryan M, Watson V, Entwistle V. Rationalising the “irrational”: a think aloud study of discrete choice experiment responses. Health Econ. 2009;18:321–36.
Vass C, Rigby D, Payne K. “I was trying to do the maths”: exploring the impact of risk communication in discrete choice experiments. Patient. 2019;12:113–23.
Cooke L, Cuddihy E. Using eye tracking to address limitations in think-aloud protocol. IEEE Int Prof Commun Conf. 2005;653–8.
Ryan M, Krucien N, Hermens F. The eyes have it: using eye tracking to inform information processing strategies in multi-attributes choices. Health Econ. 2018;27:709–21.
Krucien N, Ryan M, Hermens F. Visual attention in multi-attributes choices: what can eye-tracking tell us? J Econ Behav Organ. 2017;135:251–67.
Genie MG, Ryan M, Krucien N. Keeping an eye on cost: what can eye tracking tell us about attention to cost information in discrete choice experiments? Health Econ. 2023;32:1101–19.
Vass C, Rigby D, Tate K, Stewart A, Payne K. An exploratory application of eye-tracking methods in a discrete choice experiment. Med Decis Mak. 2018;38:658–72.
Khushaba RN, Greenacre L, Kodagoda S, Louviere J, Burke S, Dissanayake G. Choice modeling and the brain: a study on the electroencephalogram (EEG) of preferences. Expert Syst Appl. 2012;39:12378–88.
Montague R, Harvey A. Using fMRI to study valuation and choice. Adv Brain Neuroimaging Top Health Dis Methods Appl. 2014. https://doi.org/10.5772/58257.
Vass C, Rigby D, Payne K. Watching ME watching you: understanding the effect of experimental setting on stated preferences. Value Health. 2020;23:S476–7.
Ali A, Kalatian A, Choudhury CF. Comparing and contrasting choice model and machine learning techniques in the context of vehicle ownership decisions. Transp Res Part A Policy Pract. 2023;173:103727.
Hensher DA, Ton TT. A comparison of the predictive potential of artificial neural networks and nested logit models for commuter mode choice. Transp Res Part E Logist Transp Rev. 2000;36:155–72.
van Cranenburgh S, Wang S, Vij A, Pereira F, Walker J. Choice modelling in the age of machine learning: discussion paper. J Choice Model. 2022;42:100340.
Hillel T, Bierlaire M, Elshafie MZEB, Jin Y. A systematic review of machine learning classification methodologies for modelling passenger mode choice. J Choice Model. 2021;38:100221. https://doi.org/10.17863/CAM.52743.
Snaman JM, Helton G, Holder RL, Wittenberg E, Revette A, Tulsky JA, et al. MyPref: pilot study of a novel communication and decision-making tool for adolescents and young adults with advanced cancer. Support Care Cancer. 2021;29:2983–92.
Cole A, Khasawneh A, Adapa K, Mazur L, Richardson DR. Development of an electronic healthcare tool to elicit patient preferences in older adults diagnosed with hematologic malignancies. Lect Notes Comput Sci. 2022;210–28.
Witteman HO, Ndjaboue R, Vaisson G, Dansokho SC, Arnold B, Bridges JFP, et al. Clarifying values: an updated and expanded systematic review and meta-analysis. Med Decis Making. 2021;41:801–20.
Munro S, Stacey D, Lewis KB, Bansback N. Choosing treatment and screening options congruent with values: do decision aids help? Sub-analysis of a systematic review. Patient Educ Couns. 2016;99:491–500.
Bansback N, Marra C, Cibere J. Improving decision-making about medications in individuals with suspected knee osteoarthritis using a web application. J Rheumatol. 2015;42:1327.
Hauber B, Coulter J. Using the threshold technique to elicit patient preferences: an introduction to the method and an overview of existing empirical applications. Appl Health Econ Health Policy. 2020;18:31–46.
Heidenreich S, Trapali M, Krucien N, Tervonen T, Phillips-Beyer A. Two methods, one story? Comparing results of a choice experiment and multidimensional thresholding from a clinician preference study in aneurysmal subarachnoid hemorrhage. Value Health. 2024;27:61–9.
Schneider P, Brazier J, Devlin N, van Hout B. The EQ-5D-5L OPUF survey: quantifying health priorities on the individual person level using compositional preference elicitation techniques. BMJ Glob Health. 2022;7:A32.1–A32.2. https://doi.org/10.1136/bmjgh-2022-ISPH.88.
Schneider PP, van Hout B, Heisen M, Brazier J, Devlin N. The Online Elicitation of Personal Utility Functions (OPUF) tool: a new method for valuing health states. Wellcome Open Res. 2022;7:14.
Wright SJ, Vass CM, Ulph F, Payne K. Understanding the impact of different modes of information provision on preferences for a newborn bloodspot screening program in the United Kingdom. MDM Policy Pract. 2024;9:23814683241232936.
Boye K, Ross M, Mody R, Konig M, Gelhorn H. Patients’ preferences for once-daily oral versus once-weekly injectable diabetes medications: the REVISE study. Diabetes Obes Metab. 2021;23:508–19.
Gelhorn H, Ross MM, Kansal AR, Fung ET, Seiden MV, Krucien N, et al. Patient preferences for multi-cancer early detection (MCED) screening tests. Patient. 2023;16:43–56.
Oliveri S, Lanzoni L, Veldwijk J, de Wit GA, Petrocchi S, Janssens R, et al. Balancing benefits and risks in lung cancer therapies: patient preferences for lung cancer treatment alternatives. Front Psychol. 2023;14:1062830.
Seo J, Tervonen T, Ueda K, Zhang D, Danno D, Tockhorn-Heidenreich A. Discrete choice experiment to understand Japanese patients’ and physicians’ preferences for preventive treatments for migraine. Neurol Ther. 2023;12:651–68.
Acknowledgements
The authors thank the audience of the ISPOR webinar on this topic for their feedback and questions. They also thank Ludwig von Butler for his helpful contributions during the webinar and for feedback on early ideas and drafts.
Ethics declarations
Funding
This research received no specific funding.
Conflicts of interest/competing interests
Caroline Vass, Marco Boeri, Gemma Shields and Jaein Seo have no conflicts of interest that are directly relevant to the content of this article.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Availability of data and material
Not applicable.
Code availability
Not applicable.
Authors’ contributions
All authors contributed to the conception and design of the paper. The first draft of the manuscript was written by all authors and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Vass, C., Boeri, M., Shields, G. et al. Making Use of Technology to Improve Stated Preference Studies. Patient 17, 483–491 (2024). https://doi.org/10.1007/s40271-024-00693-8