Background

The proliferation and fragmentation of media channels in the digital era have both increased and diversified consumers’ exposure to a plethora of overlapping voices. In particular, social media has facilitated the amplification of, and engagement with, traditional media channels such as television. In what is sometimes referred to as the ‘second screen’ experience, Twitter found that 70% of its users share Tweets about the television shows they are watching (Vale 2016).

Today’s consumers are thus often engaged in simultaneous media usage, for example, watching television whilst surfing the internet and engaging in social media dialogue (Schultz et al. 2009). Furthermore, smartphones, tablets and interactive television have blurred the historical distinction between devices with different functions; the result is a multitude of media channels engaged across several devices, reinforcing and interacting with each other. In the UK, price comparison companies for insurance products spend millions of pounds on television advertising, yet almost all responses are received online, a further example of the interconnectivity of devices and responses (Robertshaw 2011).

In the digital era, consumers are increasingly engaging with a combination of media channels at any given time across a range of devices, with a high degree of superimposition, referred to as polychronic information processing (Naik and Raman 2003; Hanover Research 2015). Increasingly, the customer decision journey spans digital as well as traditional offline environments (Kannan and Li 2017). The number of touchpoints is thus increasing each year as traditional offline media channels are complemented with newer digital channels (Bughin 2015).

Media synergy

The digital era has presented opportunities for the marketing community to integrate different media channels to create a synergistic uplift in responses and increase the effectiveness of the marketing effort.

Media synergy has been defined as the combined effect or impact of a number of media activities being different from the sum of their individual effects on individual consumers. Thus, synergy is a phenomenon in which the whole is not always exactly equal to the sum of the parts (Schultz et al. 2012).

Specifically, when individual media channels are strategically linked together, each reinforces and adds value to the next, creating a ‘hothouse’ effect whereby the whole can exceed the sum of the parts (Bruce 2010). This effect has sometimes been illustrated as 1 + 1 + 1 + 1 = 5 (Belch and Belch 1998; Naik and Raman 2003).

Media synergy is more than a hypothesised effect and has been quantified in several real-life studies. For example, radio advertisements can reinforce imagery created by television commercials, resulting in a synergistic uplift across the two media (Edell and Keller 1999). In another example, research by the Royal Mail in conjunction with BrandScience showed a 62% increase in return on investment (ROI) from digital campaigns when combined with direct mail (Royal Mail 2017).

Combining Facebook ads with email has also been shown to achieve a net uplift. Specifically, when reached with Facebook ads, email openers were 22% more likely to respond (Chaffey 2017).

The existence of media synergy and empirical measures of its effect are therefore well documented in the literature, leading Chaffey (2017) to argue that the digital era should be viewed as a larger, more complex picture where consumers do not use a single channel, but instead engage with multiple online and offline interactions and devices during a customer path. This viewpoint is supported by Kannan and Li’s (2017) discussion of how online channels interact with traditional offline channels to create synergies.

Measuring true media channel performance

Given the multitude of overlapping and interacting media channels, the synergies between online and offline channels, the amplification provided by social media and the widening range of devices through which channels are accessed, the question arises of how the true contribution of each media strand can be measured, and what the implications are for media planning. This is highly relevant since companies spend significant amounts on advertising across an increasingly diverse range of media channels, and the ability to determine the true performance of individual media strands is critical to optimising return on investment.

A number of previous studies have sought to address this problem with varying degrees of success (Naik and Raman 2003; Lee 2011; Schultz et al. 2009, 2012; Joo et al. 2013). For example, Naik and Raman’s early study (2003) developed a theoretical model of the synergistic relationship between television and print advertisements, and studied broader synergy across a range of media channels based on a retail sales case study.

The current study seeks to extend the findings of Naik and Raman (2003) and Schultz et al. (2009, 2012) by proposing an alternative approach to the ‘last-in wins’ methodology using two case studies: a UK insurance company and a leading US university.

Current approaches to measuring media channel performance

Current approaches to measuring the contribution of different media channels to campaign performance are typically based on recording a response against the media channel that precipitated the response. Under this approach, television viewing is measured separately from radio listening, which is measured separately from website visits and so forth. Newer forms of media, such as mobile and social networks, also tend to be measured separately and individually, with disregard for overlapping media consumption of the target audience (Robertshaw 2012).

Such approaches do not take account of the contribution of supporting media channels to the precipitated response. For example, a repeated television advert may prompt a latent internet search, with the ultimate response attributed to ‘Google search’ rather than to ‘television’. In fact, most singular response measures force the respondent to select only one response precipitator (Dietrich 2014). Thus, in the preceding example, the respondent may select ‘television’ or ‘Google search’ but not both. This approach has been referred to as the ‘last-in wins’ methodology (Lee 2011).

In actuality, it would be more accurate to assert that a multitude of reinforcing media precipitated the response, which was then actualised through one final, individual medium. In this respect, the ability to discriminate between the contributions of different media strands is critical in understanding their true value to integrated marketing campaigns.

Media channel response attribution

Conventional methods of determining media channel performance are thus typically single-medium focused, assuming that consumers attend exclusively to the one medium being measured. This is demonstrably false, as it is already known that a combination of media channels can produce a greater overall benefit than the sum of its parts (Schultz et al. 2009; Royal Mail 2017). Put differently, under the ‘hothouse’ effect of 1 + 1 + 1 + 1 = 5 (Schultz 2006), to which channel is the extra ‘1’ attributed when evaluating media channel performance?

A somewhat ad hoc solution is to average out or proportionately attribute the missing ‘1’ across channels. However, this approach is fundamentally arbitrary since it ignores important correlations that can exist between specific media channels (Vale 2016), for example, between television advertising and website visits.

In addition, whilst the use of customised URLs (CURLs) for online media enables the specific tracking of media channel source, this precision is negated when a mixture of traditional and online media is deployed, for example, when television advertising is used to drive website traffic and stimulate social media engagement.

Further, the awareness-building characteristics of television are often developed through repeated exposures to a given advert, moving consumers through awareness, interest, desire and action, the ‘AIDA’ model (Priyanka 2013). For example, Manchanda et al. (2006) showed that the number of exposures to a banner advert accelerates purchase, and that this impact strengthens as visitors browse across more sites.

An email or website banner advert may trigger a latent response that was in fact stimulated, under the ‘hothouse’ effect, by repeated exposure to television advertising. This is an important point since ‘last-in wins’ measures of advertising performance are likely to understate television performance and overstate the effectiveness of banner advertising in this scenario. This situation was illustrated in Li and Kannan’s (2014) study, which found that the commonly used last-click and linear weighted attribution approaches tend to over-estimate search channels and under-estimate referral, email and display channels.
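To make the contrast concrete, the following minimal sketch (illustrative only, and not drawn from any of the studies cited above) shows how last-click and linear weighted attribution divide credit for a single conversion across a hypothetical touchpoint path; the journey itself is an assumption for illustration.

```python
# Illustrative sketch: how 'last-click' and linear weighted attribution
# divide credit for one conversion across a hypothetical touchpoint path.
# The journey below is invented purely for illustration.
journey = ["TV advert", "TV advert", "Banner advert", "Google search"]

def last_click(touchpoints):
    """All credit goes to the final touchpoint ('last-in wins')."""
    credit = {channel: 0.0 for channel in touchpoints}
    credit[touchpoints[-1]] = 1.0
    return credit

def linear(touchpoints):
    """Credit is split equally across every exposure in the journey."""
    share = 1.0 / len(touchpoints)
    credit = {}
    for channel in touchpoints:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(last_click(journey))  # {'TV advert': 0.0, 'Banner advert': 0.0, 'Google search': 1.0}
print(linear(journey))      # {'TV advert': 0.5, 'Banner advert': 0.25, 'Google search': 0.25}
```

Under last-click rules the repeated television exposures receive no credit at all, whereas linear weighting at least acknowledges them, which is exactly the distortion described above.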

Research has also shown spill-over effects between online and offline channels. Examples include Rutz and Bucklin (2011), who found a spill-over effect from paid searches to subsequent direct visits, and Joo et al. (2013), who found that television adverts can increase the volume of Google searches, especially searches on brand keywords.

Media planning models based on the ‘last-in wins’ approach are therefore inaccurate to the extent that they over- or understate the true performance of each media strand. The result is a sub-optimal deployment of advertising spend and reduced return on investment. This situation has always existed in the absence of an alternative to the ‘last-in wins’ approach, but has been exacerbated by the rapid growth in overlapping media channels.

An example of measuring media channel performance: the UK insurance industry

The UK has the largest insurance industry in Europe and the third largest in the world. The combined media spend of UK insurance companies runs into tens of millions of pounds annually across a broad media spectrum spanning television, newspapers, magazines, telephone directories, direct mail, banner adverts, email, Google, Bing, Yahoo, Facebook, various blogs, review sites and a host of smaller channels (Association of British Insurers 2017). Yet an analysis of the websites of some leading names, including Churchill, Direct Line, Liverpool Victoria and Endsleigh, reveals that many do not capture media source information from website visitors. In these instances, the media channel that elicited the website response, for example, a newspaper advert leading to an online visit, is not directly recorded.

Some insurance companies such as Saga and More Than do capture media source information, but do so using the ‘last-in wins’ approach whereby only one channel can be attributed with the response. Taking More Than as an example, visitors to the website for an insurance quotation are asked to select one option from a drop-down box in response to the question ‘How did you hear about More Than?’

  • Confused.com

  • Moneysupermarket.com

  • Comparethemarket.com

  • Magazine Advert

  • TV Advert

  • GoCompare.com

  • Tescocompare

Differences in opportunity to respond across media channels

The problem of establishing the true performance and contribution of a particular medium within an integrated marketing campaign is confounded further by differences in the opportunity of consumers to physically respond at a given time. For example, radio advertising is known to be heavily consumed during travel where the listener has very limited opportunity to respond in situ (Edell and Keller 1999). In contrast, emails are chiefly read by consumers through PCs, laptops and mobile devices where response is easily facilitated, typically through a clickable hyperlink (Kucuk 2011). In this case, radio advertising can build up awareness thereby creating a ‘halo’ effect around other media precipitating a greater response in channels such as direct mail, email and magazines. Critically, however, this synergistic benefit may not be attributed to radio in the recorded results, but instead to the medium through which the final response was precipitated.

Similar situations are observed across other media—for example, newspaper advertising offering a discount for online visitors, an approach commonly used by insurance companies. Although the newspaper advert may stimulate the initial response, this response is often attributed to a search engine (Dietrich 2014).

Some methods have been developed that seek to circumvent this situation, such as comparing a given medium plus radio in one geographical area against the same medium minus radio in another geographical area. However, this simplistic approach again relies on broad approximations due to the inevitable intrusion of other media channels into the equation. This is particularly true when such media are pervasive and extend across geographical boundaries, for example, television and the Internet.
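The arithmetic of such a geo-split comparison is simple, as the following sketch shows; all figures are hypothetical, and the caveat above about pervasive media leaking across regions still applies.

```python
# Sketch of the geo-split comparison described above: the same campaign runs
# with radio support in region A and without it in region B, and the difference
# in response rates is read as radio's contribution. All figures are
# hypothetical; as noted above, pervasive media such as television and the
# Internet reach both regions, so the estimate is only approximate.
region_a = {"responses": 1_200, "audience": 100_000}   # campaign plus radio
region_b = {"responses":   950, "audience": 100_000}   # campaign without radio

rate_a = region_a["responses"] / region_a["audience"]
rate_b = region_b["responses"] / region_b["audience"]
print(f"Estimated radio uplift: {rate_a - rate_b:.2%}")   # 0.25%
```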

Alternatives to existing media measurement techniques

The ‘last-in wins’ approach to measuring media channel performance remains in widespread use, as illustrated by the UK insurance market, and yet is known to be inaccurate to some extent with implications for media planning and return on investment.

Concerns about current approaches to measuring media channel performance have also been echoed in the wider community. For example, Lee (2011) has argued that companies need to recognise that the impact of different marketing methods should be measured in relation to each other. Lee gave the example that, when examining a firm’s display advertising, it may appear to be underperforming in comparison with search marketing. However, this does not mean that the business should invest more in search marketing, as the display adverts may be directly influencing the consumer to search for the firm (Dietrich 2014).

Kannan and Li (2017) argue that the proliferation of new technologies and online channels and the spread of marketing investments across these entities have hindered the ability of firms to measure the impact of their marketing investments accurately. Improved attribution methodologies and appropriate data are needed for understanding the individual impact of channels and touchpoints—across offline (for example, television and print) and online boundaries, and across various devices and online channels.

The question thus arises of how existing methods of evaluating media channel performance can be adapted to yield a more accurate picture of the synergistic contribution of each media strand, without being too onerous on respondents. The current study presents one such alternative within an online environment, comparing the alternative and proposed approaches, and considering the implications for media planning.

Case studies of Aviva and the University of Chicago

Both Aviva and the University of Chicago use the ‘last-in wins’ methodology. These two disparate sectors, insurance and education, were chosen for the current study to provide cross-verification of the data through triangulation, reducing the possibility that the findings are sector-specific and thereby lending support for generalisation of the results.

The objectives were to determine whether alternative approaches to the ‘last-in wins’ approach yield significantly different results and, if so, whether the differences are consistent across sectors.

Aviva’s approach to measuring media channel performance

Aviva plc is the UK’s largest insurer with 31 million customers worldwide (Aviva 2017). Its dominant position within the UK market and high level of consumer awareness make it a representative example of how media channel performance is measured within an online environment.

When visiting the Aviva car insurance website (www.aviva.co.uk/car/), respondents are asked to confirm where they heard about Aviva, choosing from its current range of media channels. Importantly, only one option can be selected. The options are presented in alphabetical order, and social media is not offered as an option.

Where did you hear about Aviva?

  • Already a customer

  • Do not know

  • Email

  • Friends or family

  • Newspaper/magazine

  • Phone directory

  • Post

  • Price comparison site

  • Google/Bing/Yahoo

  • TV

  • Website advert

University of Chicago’s approach to measuring media channel performance

The University of Chicago is ranked the 10th best university in the world (University of Chicago 2017) and is therefore well known to prospective students; it employs a broad range of media channels to recruit new students.

When requesting information via the University of Chicago Medicine website (https://www.uchospitals.edu/contact/request-information.html), prospective students are presented with the following question and asked to select one option only.

Where did you hear about us?

  • University of Chicago Physician

  • Other Doctor

  • Nurse or Hospital Employee

  • Internet

  • Social media

  • Friend or Relative

  • Magazine

  • Newspaper

  • Radio

  • Television

  • Flyer or Poster

  • Other

Further examples of US universities and institutions using the ‘last-in wins’ method include California State University, Purdue University, the University of Colorado, the University of Hawaii, the University of Houston and the University of Wisconsin.

Methodology

In the case of Aviva, a random sample of 100 insurance customers from across the UK was obtained through a private survey company and split into two halves; one half of the random sample served as a control group and the other half as a test group.

For the purposes of this study, the question was amended slightly to read ‘Where did you last hear about Aviva?’ This amendment was necessary given that respondents were not actually on the Aviva website at the point of data collection, and hence a subtly different wording was employed to reflect this without affecting the study’s objectives.

The control group was asked to select only one option from the list of media channels, consistent with the current approach used by Aviva.

The test group was asked exactly the same question as the control group. However, critically, this group of respondents was asked to select all that apply.

The same approach was employed for the University of Chicago, with a random sample of 100 prospective university students aged 18–25 (inclusive) from across the US obtained through a private survey company. Again, the random sample was split into two halves to provide control and test groups.

As with the Aviva approach, the question was amended slightly to read ‘Where did you last hear about the University of Chicago?’ since respondents were not necessarily on the University of Chicago website at the point of data collection.

No demographic data were gathered, though this did not affect the central objective of the study: to determine whether alternative approaches to the ‘last-in wins’ approach yield significantly different results.

Results and discussion

Aviva results

Under Aviva’s existing approach to recording media source information, whereby website visitors are asked to select one option from a list of different media channels, television was vastly over-represented in the control group, as shown in Table 1. Clearly, this finding did not signify that respondents had only heard of Aviva through television; rather, it revealed that respondents were more likely to recall Aviva being advertised on television than through other media channels.

Table 1 Comparison of Aviva singular and multiple media source measurements

By comparison, the results from the test group, in which respondents were asked to select all channels through which they had heard of Aviva, revealed a wider distribution of responses across media channels. Whilst television remained the most prominent channel, it comprised 33% of all responses compared to 86% of all responses in the control group. In summary, the pattern of response distributions in the control group was different to that of the test group, as shown in Table 1.

The null hypothesis that there was no significant difference between the two alternative measures of media channel performance was tested by comparing the difference between the means based on paired observations.

The computed value for this lower-tail test was −4.74, which was lower than the critical value of t = −1.81 (degrees of freedom = 10 and 5% level of significance). Therefore, the null hypothesis was rejected at the 5% level of significance, concluding that the mean level of measurement under the multiple (test) approach was significantly different to that of the extant, singular (control) approach.
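As a minimal sketch of this paired comparison (not the authors’ code), the mechanics can be reproduced from the per-channel counts. Only the four Aviva channel pairs quoted in the ratio comparison below are included here; the reported test used all eleven channel pairs, hence 10 degrees of freedom.

```python
# Minimal sketch (not the authors' code) of the paired comparison described
# above. Each pair is one media channel's response count under the singular
# ('last-in wins') and multiple ('select all that apply') measurements. Only
# the four Aviva pairs quoted in the ratio comparison below are included.
from scipy import stats

singular = [4, 1, 2, 43]   # Already a customer, Email, Price comparison site, TV
multiple = [4, 5, 11, 61]

t_stat, p_value = stats.ttest_rel(singular, multiple)
print(f"t = {t_stat:.2f}, one-tailed p = {p_value / 2:.3f}")
```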

In addition to yielding statistically different results, the level of detail observed in the multiple measures of media channel responses was increased: a total of 185 responses compared with 50 in the control group. This reflected a broad interaction of media channels in the consumer response process as illustrated in Figs. 1 and 2.

Fig. 1 Distribution of Aviva media channel responses using singular measurement approach

Fig. 2 Distribution of Aviva media channel responses using multiple measurement approach

The results also provide a solution to the question of where the extra ‘1’ should be attributed in the illustration of synergy as 1 + 1 + 1 + 1 = 5 (Belch and Belch 1998; Naik and Raman 2003). Comparing the ratio of multiple responses against singular responses given in Table 1 provides the following:

  • Already a customer = 4/4 = 1.00

  • Email = 5/1 = 5.00

  • Price comparison site = 11/2 = 5.50

  • TV = 61/43 = 1.42

Applying these ratios to the extra ‘1’ now allows us to reformulate the illustration as follows:

1.00 + 1.42 + 1.46 + 1.12 = 5.00, thereby providing a more precise method for synergistic response attribution that does not rely on the sweeping assumptions of the singular approach.

Additionally, it can be seen that the synergistic uplift is greatest on email and price comparison sites.
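The ratio calculation itself is straightforward; the short sketch below simply recomputes the figures quoted above from the Table 1 counts, where a higher ratio indicates a greater uplift relative to the ‘last-in wins’ measure.

```python
# Sketch of the multiple-versus-singular ratios quoted above, computed from
# the Aviva counts given in Table 1. A higher ratio indicates a greater
# synergistic uplift relative to the 'last-in wins' measure.
responses = {
    # channel: (singular count, multiple count)
    "Already a customer":    (4, 4),
    "Email":                 (1, 5),
    "Price comparison site": (2, 11),
    "TV":                    (43, 61),
}

for channel, (singular, multiple) in responses.items():
    print(f"{channel}: {multiple / singular:.2f}")
```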

University of Chicago results

Under the University of Chicago’s existing ‘last-in wins’ approach to recording media source information, the majority of respondents reported that they had heard of the university via the Internet.

By comparison, the results from the test group, in which respondents were asked to select all channels through which they had heard of the University of Chicago, revealed a broader distribution of responses across media channels. The Internet remained the most popular channel, comprising 32% of all responses compared to 48% of all responses in the control group. The pattern of response distributions in the control group was found to be different to that of the test group, as shown in Table 2.

Table 2 Comparison of University of Chicago singular and multiple media source measurements

The null hypothesis that there was no significant difference between the two alternative measures of media channel source was again tested by comparing the difference between means based on paired observations.

The computed value for this test was 3.2, which was greater than the critical value of t = 2.8 (degrees of freedom = 10 and 1% level of significance). Therefore, the null hypothesis was rejected, concluding that the proposed new approach, incorporating synergistic effects, was significantly different to that of the extant, ‘last-in wins’ (control group) approach.

In addition to statistically significant differences in the results, the richness of detail observed in the multiple measures of media channel responses was increased: a total of 117 responses compared with 50 in the control group. The differences are illustrated in Figs. 3 and 4.

Fig. 3 Distribution of University of Chicago media channel responses using singular ‘last-in wins’ measurement approach

Fig. 4 Distribution of University of Chicago media channel responses using multiple, synergistic measurement approach

Further, social media emerged as an important source of responses, distinct from the generic ‘Internet’ option.

Comparing the multiple responses against singular responses given in Table 2 provides the following ratios:

  • Internet = 38/24 = 1.58

  • Friend or relative = 20/12 = 1.67

  • Newspaper = 5/2 = 2.50

  • Radio = 5/1 = 5.0

  • Television = 10/1 = 10.0

  • Flyer or poster = 7/1 = 7.0

  • Social media = 28/8 = 3.50

  • Already a student at UoC = 1/1 = 1.0

Adapting the illustration of media synergy as 1 + 1 + 1 + 1 = 5 to read 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10, and applying these ratios allows us to reformulate the illustration as:

1.10 + 1.10 + 1.16 + 1.31 + 1.62 + 1.43 + 1.22 + 1.06 = 10.0

It can be seen from Table 2 that those channels that appear to be contributing the greatest synergistic uplift in response are television and radio.

Comparing these results with those of Aviva, it is apparent that different media mixes lead to different synergistic uplifts.

Adopting a multiple media channel measurement approach thus provides the additional benefits of more precise synergistic response attribution, whilst identifying ostensibly underperforming media channels that are actually contributing to the final, precipitated response.

Implications for media planning

A study undertaken by Noel-Levitz (2016) suggested that US colleges and universities typically spend $2232 on advertising per recruited student. Thus, on a total advertising spend of $1 m, an institution could expect to recruit 448 students.

Applying the response distributions from Table 2 for the ‘last-in wins’ and full attribution approaches to the 448 recruited students provides a hypothetical comparison of differences in student cost per acquisition (CPA) by media channel as illustrated in Table 3.

Table 3 Hypothetical comparison of difference in CPA outcomes from using different measures of performance

The findings of this study suggest that media planning based on singular, ‘last-in wins’ measures may lead to inaccurate decision-making, in particular, the erroneous assumption that removing ostensibly weaker-performing media channels, those with the fewest responses and the highest cost per response, will lead to an improvement in overall campaign performance.

For example, in Table 3 under the ‘last-in wins’ approach, flyers and posters would be removed as a cost-ineffective channel (CPA = $5556). However, under the synergistic, full attribution measurement approach, investment in flyers and posters would actually be increased (CPA = $1786).
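A short sketch of this comparison is given below. The $2232-per-student benchmark and the flyer and poster response shares (1 of 50 singular responses, 7 of 117 multiple responses) come from the text; the per-channel spend is a hypothetical figure chosen purely for illustration, so the resulting CPAs differ slightly from those in Table 3, which depend on the underlying spend allocation and rounding.

```python
# Sketch (illustrative assumptions, not the exact Table 3 computation) of how
# the two attribution approaches change the apparent cost per acquisition (CPA)
# for flyers and posters. The $2232-per-student benchmark and the response
# shares are taken from the text; the per-channel spend is hypothetical.
TOTAL_SPEND = 1_000_000            # total advertising budget ($)
COST_PER_STUDENT = 2_232           # Noel-Levitz (2016) benchmark ($ per recruit)
students_recruited = TOTAL_SPEND // COST_PER_STUDENT   # = 448

share_last_in = 1 / 50             # flyer/poster share of singular responses
share_full = 7 / 117               # flyer/poster share of multiple responses
flyer_spend = 50_000               # hypothetical spend on flyers and posters ($)

for label, share in [("last-in wins", share_last_in), ("full attribution", share_full)]:
    attributed = students_recruited * share
    print(f"{label}: {attributed:.1f} students attributed, CPA = ${flyer_spend / attributed:,.0f}")
```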

It is apparent, therefore, that synergistic measures of media channel performance lead to different outcomes in the media planning decision-making process. In turn, these differing outcomes have implications for the ultimate performance of student recruitment campaigns and ROI.

The results provide institutions with an alternative approach to measuring the effectiveness of advertising campaigns in recruiting new students; an approach which quantitatively incorporates synergistic effects. This is important since it contributes to the development of media mix models which optimally allocate marketing investments across media channels according to the truer performance of each media strand, which is crucial for improved marketing ROIs.

Conclusion

Today’s consumers are surrounded by a chorus of media messages from a multitude of sources, through which they interact and process information; this is accentuated by social media and its ability to increase engagement across different platforms and devices.

This situation has challenged the legitimacy of existing approaches to measuring media channel performance based on recording the single channel through which the final response is precipitated—the ‘last-in wins’ approach. It also echoes calls in the wider community for newer approaches that recognise and attribute the role of media synergy in building and precipitating responses. With the growing number of overlapping media channels, the accuracy of measuring the separate performance of each media strand via the ‘last-in wins’ approach has become the subject of debate.

Yet a review of the insurance and education sectors reveals that many organisations either do not directly capture media source information from website visitors, or employ singular media source measurements. In the latter case, the media channel that precipitated the response is considered the ‘winner’ and is credited with the response, effectively disregarding the synergistic contribution of other media channels towards the response.

Disregarding the synergistic contribution matters because the insurance and education sectors spend very significant amounts on advertising each year across a broad and diverse range of media channels, increasingly embracing digital channels and social media. A majority of business is now transacted online, where accurate calculations of ROI are critical to optimising future media planning and advertising decisions.

Comparing current approaches to measuring media channel performance with that proposed in this study reveals a significant difference between the outcomes of the two approaches. This is important on several grounds. Firstly, observed media channel performance depends on how it is measured. Secondly, the study offers an alternative approach to measuring media channel performance that recognises the contribution of each strand whilst quantitatively incorporating the effects of synergy. Thirdly, it questions the validity of extant media planning models whilst proposing an alternative and testable approach that may improve ROI from integrated marketing campaigns.

Limitations and scope for future research

Further studies in other sectors involving larger sample sizes would lend support for the findings and confirm the extent to which the findings can be generalised.

Particular synergistic correlations may also exist between specific media channels, for example, television and social media. Comparing net uplifts in response rates between different media strands may reveal symbiotic interactions, and augment the range of synergistic opportunities.

The results of this study provide an extension of the range of possibilities for measuring synergistic media channel performance and may lead to improvements in media planning decisions. Testing the singular, ‘last-in wins’ approach against the multiple, full attribution model would provide empirical validation of the efficacy of the proposed new approach in media planning.