Introduction

While crime rates in most Western societies have declined in recent years [1–3], cybercrime rates have skyrocketed [4]. The highest estimates of costs associated with cybercrimes (including profits made by offenders) often approach $1000 billion, an unverifiable number that would rival or even exceed the sums generated by drug trafficking [5, 6]. A multidisciplinary team of British researchers recently suggested the much more conservative but realistic cost of $67.5 billion, carefully extrapolated from national and international data, which includes direct and indirect costs associated with cybercrime [7]. In Canada, results from the 2009 victimization survey suggest that cybercrime constitutes 29.5 % of property crimes [8, 9]. More recent data from the Crime Survey for England and Wales paint an even bleaker picture, suggesting that there were 5.1 million incidents of cyber fraud and an additional 2.5 million cases of computer misuse in the year ending in June 2015 ([10]: 19). In comparison, the Crime Survey estimates that during the same period the total number of offenses against households and adults in the country reached 6.5 million. If cybercrime statistics were added to that data, the volume of crimes would double overnight. The majority of cyber fraud and other cybercrime incidents are, however, never reported to the police, who have very limited capacities to deal with this global technological crime wave.

Despite the huge discrepancy between the level of cybercrime and the availability of specialized investigative resources, police organizations still regularly arrest groups of malicious hackers and scammers, providing anecdotal evidence of the inherently transnational nature of online offending. For example, in February 2008 the Quebec provincial police (the Sûreté du Québec) dismantled a network of 10 local hackers who controlled more than 630,000 computers in 70 countries [11, 12]. That same year, the United States Secret Service arrested Albert Gonzalez and four accomplices for stealing 170 million credit card numbers and then selling them in underground forums to notorious fraudsters in Germany, Belarus, China, and Estonia [13]. In October 2010, at the request of the Dutch police, Armenian investigators arrested a Russian citizen who controlled the Bredolab botnet, which comprised more than three million computers [14]. In December 2012, the FBI made 10 arrests in connection with the Butterfly botnet, which had successfully infected more than 11 million machines [15]. And in May 2014, American authorities coordinated a massive operation involving 19 countries that led to the arrest of more than 90 people implicated in the design and use of the Blackshades malware [16]. Most of these heavily publicized operations were related to the creation, management, or use of a botnet, a network of computers controlled by a malicious hacker who uses them—without their owners’ knowledge—to commit a broad range of crimes. Because of their versatility and scalability, botnets provide the core infrastructure of many cybercrime operations.

In addition to the traditional criminal enforcement approach, which is poorly equipped to deal with complex forms of transnational digital crimes and has to rely on identifying, neutralizing, and punishing a few reckless hackers, two other polycentric approaches – civil remedies and nodal regulatory methods – are emerging as ways to counter the threat botnets pose to a healthy digital ecosystem [17–19]. In contrast to the punitive approach of the criminal justice system, the two polycentric approaches described in this article are based on partnerships between public and private actors that are intended to reinforce the resilience of the digital ecosystem by disrupting the technical infrastructure that enables malicious botnets and by helping infected victims clean their computer equipment. These unconventional crime management approaches offer an innovative response to a new breed of property crimes whose characteristics (high-volume and low-impact) are incompatible with a law enforcement system created to process low-volume, high-impact crimes, such as aggravated assaults and homicides.

The first section of this article provides a more detailed description of what botnets are and how they can be used to support a broad range of criminal activities. The second section focuses on the modest effectiveness of police interventions that rely on criminal enforcement. The third and fourth sections examine two polycentric anti-botnet strategies, focusing first on Microsoft’s efforts to take down the servers of the most prolific offenders and then on five national anti-botnet initiatives that rely on harm reduction principles put into operation by telecommunication operators.

Botnets: the new cybercrime infrastructure

Botnets are often defined as networks of computers infected by malicious software (‘bot’ being the abbreviation for robot) that allows a criminal, or ‘botmaster’, to simultaneously control thousands, if not millions, of machines [20]. The structure of a botnet has been described as “compulsory military service for Windows boxes” [21], although, since that comment was made, botnets have also infiltrated Apple’s and Google’s operating systems. While it is notoriously difficult to reliably measure the size of a botnet [22], which sometimes leads investigators to overestimate the impact of their interventions, no one disputes that they have, to a large extent, enabled the automation and industrialisation of cybercrime.

There are five main stages in the development of a botnet. First, the botmaster has to develop the malicious software that allows him or her to communicate covertly with the infected machines. These exchanges have to be bidirectional so that various instructions can be sent to the ‘zombies’ (bots), making them aware of their new operational status. The malware also has to be able to locate and extract several kinds of information from the infected machines, such as documents containing specific keywords, email credentials, or passwords that allow access to bank accounts or other online services. Botmasters who lack programming skills can acquire a fully functional application from underground forums [23]. The prices for these programs vary based on their capability and the technical support offered by their designers. For example, in March 2010, Zeus—a malware specifically designed to target financial information—sold for between $3,000 and $19,000, depending on the options included [24]. One of these options makes it possible to remove competitors’ malware from an infected machine, giving the hacker complete control over the computer in order to maximize profitability.

During the second stage, hackers distribute their malware to as many computers as possible. There are several ways of doing this. For instance, hackers can launch a phishing campaign by sending out millions of emails inviting recipients to click a fraudulent link, usually framed as an urgent requirement (such as the need to change a password to maintain an online account). Hackers can also add malicious code to poorly protected websites, which then infects users who visit those sites, as the New York Times, the BBC, AOL, and the website of chef Jamie Oliver have experienced in recent years. Hackers can even hire brokers to install their applications on infected or non-infected machines, with the fee based on the geographic region and the number of computers infected [25].

During the third stage, hackers take control of the infected machines and integrate them into their command infrastructure. Because of the size of a botnet – sometimes several tens of thousands of computers, if not more – specific communication protocols must be implemented to coordinate and distribute tasks so that hackers do not need to manage each infected computer individually. Botmasters send instructions to their bots through dedicated servers, known as C&C (command and control) servers, or through online chat services, known as IRC channels, to which bots regularly connect. At this stage, the challenge for the botmaster is to take control of the computers without the victims or their Internet Service Providers (ISPs) noticing, since detection would result in corrective action and the loss of a potentially profitable machine. In order to evade discovery, the most adept botmasters use advanced encryption methods. Those who lack the skills to implement encryption can always purchase such services on specialized forums, but these off-the-shelf solutions are more widespread and therefore more exposed to the scrutiny of antivirus software.

During the fourth stage, hackers activate their botnet and begin to collect a financial profit or carry out some other action, such as launching an attack to neutralise an adversary. There are five main ways to monetize a botnet [26], although they are probably not the only methods, since there is virtually no limit to hackers’ creativity. Botnets are particularly well adapted to carrying out distributed denial-of-service (DDoS) attacks, spam, bank fraud, click fraud, and the marketing of illegal proxy services.

DDoS attacks use the botnet’s large size to saturate the servers of targeted organisations with fake requests, making them unavailable to legitimate users. These attacks can be commissioned to disrupt the commercial activities of business competitors, to support an ideological position – for instance, to protest a government or news organisation decision – or to blackmail a company, such as an online casino or a pornography website, whose profitability declines immediately if its services are unavailable. In 2012, renting a botnet controlled by Russian hackers for a DDoS attack cost between $30 and $70 per hour, with discounts of up to 50 % for high-volume purchases ([23]: 171).

Mass spam campaigns are a second source of potential profit for botmasters. Because botnets make it possible to distribute unwanted messages that appear to come from legitimate users, they are a very effective way to elude the email filters and exclusion lists implemented by network administrators. For example, the Grum botnet, which was dismantled in July 2012, sent 18 billion pieces of spam every day from 120,000 infected computers [27]. This operation made its botmaster (who called himself GeRa) $2.7 million in commissions on the sale of 80,000 counterfeit pharmaceutical products over a three-year period [28]. Another study conducted that same year on the economics of spam identified similar profits ($1.9 million in annual commissions) for the three operators of the Rustock botnet [29].

With bank fraud, botnets are used to secretly obtain personal information (usernames and passwords) stored on the hard drives of compromised machines. This allows hackers to bypass the security controls implemented by banking institutions, access their victims’ accounts, and empty them through unauthorized transfers. Alternatively, the hackers can sell this information on underground forums. Some botnets that specialise in this type of fraud (such as Zeus or SpyEye) can even add deceptive information to copies of bank web pages visited by victims to encourage them to provide additional financial information, such as credit card numbers and PINs [30].

Click fraud, although much less common than bank fraud, is just as lucrative. It exploits the most popular form of online advertising, in which advertisers pay site owners a small fee every time a user clicks on the advertiser’s banner and is redirected to their website [31]. This online model is the opposite of traditional print or audiovisual media models, where a fixed price is paid to disseminate a message to a large audience. In 2012, $94.2 billion was spent on online advertising [32]. For click fraud, criminals use botnets to automatically and repeatedly click ads that appear on their partners’ sites, extracting undue revenue from advertisers through the specialised services managed by Google, Yahoo, or Microsoft [33]. The botnet ZeroAccess, which had control over approximately a million infected machines, was able to generate more than $2.7 million in revenue through click fraud by making 1 cent per click [34].

Finally, hackers can also rent out machines they control to users who want to mask their activities or illegal content, such as child pornography. These illegal proxy services use victims’ computers to elude police surveillance [35].
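To convey the scale these figures imply, the short calculation below simply divides the numbers reported above (Grum’s daily spam volume and ZeroAccess’s click-fraud revenue). It is an illustrative back-of-the-envelope sketch, not an estimate taken from the cited studies.

```python
# Back-of-the-envelope arithmetic based only on the figures cited above.

# Grum spam operation
spam_per_day = 18_000_000_000      # 18 billion messages sent every day
grum_bots = 120_000                # infected computers in the Grum botnet
print(f"Messages per bot per day: {spam_per_day / grum_bots:,.0f}")
# -> roughly 150,000 messages a day from each compromised machine

# ZeroAccess click fraud
revenue = 2_700_000                # $2.7 million in click-fraud revenue
price_per_click = 0.01             # 1 cent earned per fraudulent click
clicks = revenue / price_per_click
zeroaccess_bots = 1_000_000        # approximately a million infected machines
print(f"Fraudulent clicks required: {clicks:,.0f}")                     # ~270 million
print(f"Clicks per infected machine: {clicks / zeroaccess_bots:,.0f}")  # ~270
```

Spreading some 270 million clicks across a million machines means each bot only needs to generate a few hundred clicks, a volume that is difficult to distinguish from ordinary browsing behaviour.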

The fifth and final stage is maintaining control of the network of compromised machines and involves constant surveillance to ensure that the malicious code remains undetectable by the most popular anti-virus software. During this stage, botmasters need to ensure that the virus-detection tools on infected machines continue to appear functional to their owners while their regular updates, which would compromise the botmaster’s access, are silently blocked. Botmasters also have to make sure that their C&C servers have not been publicly listed by security researchers (in the information available on zeustracker.abuse.ch, for example), which would limit the effectiveness of their botnet and, consequently, their profits.

It remains extremely difficult to obtain reliable figures on the prevalence of and damage caused by botnets on a national or international level. However, in a study conducted in the Netherlands, van Eeten et al. [36] estimated the overall infection rate in developed countries at between 5 % and 10 % of all computer equipment, which allowed Anderson et al. [7] to estimate the global cost associated with preventing this form of crime at $24.8 billion, a large part ($20 billion) of which is the expense of disinfecting or protecting threatened machines.

The limitations of the law enforcement response: the Sisyphus syndrome

In Greek mythology, Sisyphus represents the absurdity of futile and hopeless labour [37]. The son of King Aeolus and known for his cunning and deceit, Sisyphus drew the wrath of the gods for his deeds but refused to die and always found a way to return to the world of the living. Zeus (the god, not the botnet) eventually punished him by sentencing him to roll a boulder to the top of a mountain. However, each time, before Sisyphus reached the summit, he could no longer bear the weight of the boulder, which tumbled back to the bottom of the mountain, forcing him to start all over again. Although Camus was convinced that Sisyphus could be happy despite the futility of his destiny, this parable is a fair depiction of the powerlessness of today’s police when faced with botmasters.

Police organizations can certainly boast about a few highly publicized botnet takedowns in recent years, in Canada as well as in the United States and Europe. But these well-choreographed arrests did not always produce the expected long-lasting effects. For example, even though the hacker who controlled the Bredolab botnet was ultimately sentenced by an Armenian court to four years in prison [38], two days after the servers were seized Bredolab was again sending spam and malicious content, this time from Russian servers and with what appeared to be a preference for Spanish victims [39]. The resilience of this botnet can be explained for the most part by the fact that the police left the basic infrastructure (millions of infected machines) intact, although they did attempt to inform victims that their computers had been infected ([40]: 105, 110). It takes only a few hours for opportunistic hackers to regain control of a botnet once its original operator is behind bars. The investigators’ job is all the more complicated because hackers tend to establish redundant C&C servers in numerous countries, and only one server needs to remain active for the entire botnet to survive. In the case of Operation Butterfly, the FBI collaborated with police from Bosnia, Croatia, Macedonia, New Zealand, Peru, and Britain. This required both a considerable coordination effort and resources that are unavailable to most Western police organizations, let alone law enforcement agencies in emerging countries that have far more urgent security challenges to address.

In addition to botnets’ resilience to police intervention, the availability of highly sophisticated stealth techniques, such as communications encryption or polymorphic code that is not easily detected by anti-virus software, leads some observers to believe that most of those who end up in court are probably only novice or intermediate-level hackers [41]. This observation was partially confirmed in the Basique trials where, despite the damage caused by the large number of infected computers, only one of the ten accused hackers seemed to have advanced technical skills and none of them had benefited financially from their bots [12]. The sentences handed out in such cases are usually relatively light, based on the age of the accused, the non-violent nature of the crime, and the fact that it is often a first offense, as well as the high potential for the hackers to be reintegrated into today’s thriving digital economy [42]. However, and this is a crucial point, the criminal justice system seems unable to establish notification and disinfection strategies for the thousands, if not millions, of infected computers worldwide. While those who are responsible for attacks are sometimes punished, their victims remain vulnerable due to the inability of police and judicial authorities to adopt intervention methods for extremely high-volume crimes that have limited individual impact (in contrast to crimes against people, for example, which are low volume but high impact). A notable exception is the Coreflood takedown carried out in April 2011 by the FBI and the US Department of Justice (DOJ). Coreflood, a botnet of more than two million infected machines, is believed to have been in operation since 2001 and was used to steal personal financial information [43]. The seizure of the C&C servers was done in conjunction with a sinkholing process, in which traffic between the original C&C servers and their bots was redirected to a server controlled by law enforcement authorities. Having obtained the proper authorization from a federal court, the Department of Justice set up a substitute server designed to receive incoming communications from machines compromised by Coreflood and direct them to uninstall the malicious software. The FBI claims that “hundreds of thousands” of computers, representing 95 % of all the infected machines, were cleaned through this approach [44]. It seems ironic that the main US criminal law enforcement organization had to resort to civil action to achieve its desired result ([18]: 244).

Traditional enforcement alone is obviously of limited effectiveness in dealing with some digital crimes, such as botnets. Two alternative strategies have emerged over the past few years to fill this security gap, both of which rely heavily on private actors. The first, initiated by Microsoft, makes use of the civil action approach described above and uses the company’s considerable financial and technical resources to dismantle the most aggressive botnets, with no or only limited police involvement. The second strategy, deployed in half a dozen countries, also relies on the private sector but adopts a more inclusive regulatory philosophy. Regulation (as an activity or instrument, not a field of study) is defined as “the sustained and focused attempt to alter the behaviour of others according to defined standards or purposes with the intention of producing a broadly identified outcome or outcomes, which may involve mechanisms of standard-setting, information-gathering and behaviour modification” ([45]: 26) and is more often used by lawyers, economists, political scientists and sociologists than by the police. Far from being limited to mandatory rules by which the state implements its decisions and obtains compliance, regulation is now understood as the many mechanisms, instruments, and institutions that permit indirect control of certain sectors of human activity [46, 47]. Regulation is therefore as concerned with persuasion and incentives as it is with deterrence and punishment, even if the threat of coercive measures remains central to its objectives when self or delegated regulatory approaches fail [48].

Taking down botnets one court order at a time: the Microsoft strategy

As one of the largest corporations arising from the new digital economy, and arguably the leader among software makers, Microsoft depends on the trust of consumers to keep its revenues growing at a healthy pace. In 2003, as part of a new marketing strategy that made security, privacy, and trustworthiness its business priorities, Microsoft created the Internet Safety and Enforcement Team (ISET), which brings together technical and legal experts to fight cybercrime ([49, 50]: 6). ISET was rebranded as the Digital Crimes Unit (DCU) in 2008 and in 2013 Microsoft opened a Cybercrime Centre at its Redmond headquarters to showcase its technical tools to customers and industry partners. In February 2015 satellite centres were launched in Singapore, Beijing, Berlin, Tokyo, and Washington, DC. Microsoft is using this global footprint and the substantial resources associated with it to operate an ambitious anti-botnet program that substitutes the extensive powers wielded by the civil courts for the law enforcement tools it cannot command. In Microsoft’s assistant general counsel’s own words, “civil litigation remedies, including injunctions, are appropriate and effective tools for stopping the harms caused by those who use criminal botnets to violate commercial and intellectual property laws” ([49]: 2).

Over four years (2010–14), Microsoft was responsible for at least nine botnet takedowns, alone or in partnership (summarised in Table 1). The court orders requested in these cases were based on the harm caused to the company by its customers’ loss of confidence in the brand as well as costs incurred because of botnet activities, such as the excess traffic that spam generates on its free email service or the security features that must be incorporated into its operating systems to protect them against malware infections ([18]: 247). When granted, the orders usually followed a similar pattern: first, they allowed the company to seize control of the domain names and C&C servers that enabled the coordination of large botnets. These domain names and hosted servers are frequently managed by registrars, hosting companies, and data centres that operate legitimate businesses and are unaware of the ultimate uses of some of the services they provide to unfamiliar customers. Such businesses are much more likely to comply with court orders (especially when these are served by US law enforcement agents) than ‘bulletproof hosts’ based in unresponsive jurisdictions whose business model relies on guarantees that they will ignore such legal requests [51]. Once domain names and C&C servers were under Microsoft’s authority, a sinkholing strategy was initiated, where communications between compromised bots and their C&C servers were redirected toward machines controlled by the company, severing the ties between infected computers and their botmasters and allowing for further research into the capacities and activities of the disrupted botnet, as well as making it possible to notify victims ([49]: 6; [43]: 752). In some rare cases (the Citadel botnet, for example), Microsoft was also given authorization to remotely clean infected machines, but this more intrusive approach can have unintended effects that can range from making the victims’ computers unstable to damaging the data they hold ([43]: 767). The legal liabilities that are potentially generated by such interventions, even if they have occasionally been authorized by lower courts, explain why they are rarely used.
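As a purely illustrative sketch of the sinkholing step described above, the fragment below shows what the receiving end of a sinkhole might look like once seized domain names have been pointed at a server controlled by the defender: bots keep ‘calling home’, and the server simply records their IP addresses so that victims can later be notified. The port number, file name, and overall design are assumptions made for illustration and do not describe Microsoft’s actual tooling.

```python
import socket
from datetime import datetime, timezone

SINKHOLE_PORT = 8080          # hypothetical port the bots are assumed to use
LOG_FILE = "sinkhole_hits.log"

def run_sinkhole() -> None:
    """Accept connections from bots whose C&C domain now resolves here."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", SINKHOLE_PORT))
        srv.listen()
        while True:
            conn, (ip, _port) = srv.accept()
            with conn:
                # Log the infected machine's address and a timestamp, then
                # close without answering: the bot is cut off from any real
                # command-and-control channel, and the log supports victim
                # notification and further research on the botnet.
                stamp = datetime.now(timezone.utc).isoformat()
                with open(LOG_FILE, "a") as log:
                    log.write(f"{stamp}\t{ip}\n")

if __name__ == "__main__":
    run_sinkhole()
```

In actual takedowns, the redirection itself is performed by registrars and hosting companies complying with the court order; the sinkhole merely harvests the resulting traffic.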

Table 1 List of Microsoft botnet takedowns

Despite the promising outcomes of Microsoft takedowns, enthusiasm for this unilateral strategy has been tempered by a number of criticisms that point to the long-term ineffectiveness of such takedowns, the collateral damage sometimes inflicted due to deficient coordination or planning, and the perceived lack of judicial oversight that derives from recourse to civil remedies. On a technical level, even successful takedowns seem to result in improvements that are limited in scope and time. Unless botmasters are arrested and their infrastructure completely and permanently dismantled, residual resources and contingency plans, such as the use of peer-to-peer communication protocols that bypass C&C servers as well as other elaborate defensive measures, allow them to evade sinkholing and quickly regain control over large herds of compromised machines [52]. The Kelihos takedown is a case in point: after the initial Microsoft effort, a new version of the botnet was found online within weeks and had enslaved more than twice as many machines. It was taken down in February 2012 by Kaspersky Lab in collaboration with two other companies (Crowdstrike and Dell SecureWorks). However, it took only 20 min for the bot-herders to bring the botnet back to life. Crowdstrike independently attacked this third version again in February 2013 [53]. Unsurprisingly, the malware was still very active in 2014, this time targeting gullible patriotic Russians [54]. Nadji and Antonakakis [55] estimate that these resilient features make it increasingly difficult to take an entire botnet down at once. According to the authors, the ZeroAccess operation (one of the most recent ones) disrupted only 38 % of the botnet’s infrastructure and its impact was probably much more modest than expected.

Beyond the issues of effectiveness and durability of outcomes, a lack of planning or coordination can result in collateral damage that may offset the intended benefits of takedowns. If undertaken in an ad hoc manner, such operations may, for example, interfere with the monitoring and intelligence-gathering activities of other security companies and non-profit organizations, which then lose access to valuable information and see their efforts to fight botnets undermined ([18]: 251). In the case of the Citadel takedown, a prominent security commentator estimated that one-fourth of the domains seized by Microsoft were being used for monitoring purposes and did not pose any threat [56]. When the information leading to a botnet takedown is imprecise or inaccurate, legitimate uninfected users may also be caught in a wide and indiscriminate web of technical disruptions created by the seizure of the servers on which they depend to maintain their online presence. For example, the operation targeting the Bladabindi and Jenxcus botnets in June 2014 resulted in the seizure of 23 popular domain names from No-IP, a hosting company that offers a dynamic DNS service to its customers. The expected outcome was that 18,000 nodes would be severed from botnets, but a lack of prior communication between Microsoft’s legal team and No-IP and an insufficient understanding of the latter’s technical infrastructure had the unintended effect of disconnecting five million harmless websites from the Internet [57]. Although the situation was resolved a few days later and Microsoft readily admitted that this particular operation had overreached, the negative impact, even if it stemmed from an honest mistake, appears disproportionate to the stated objectives.

Finally, some authors have expressed concern about the lack of transparency and legal oversight associated with these takedowns. To maintain an element of surprise, Microsoft requests that the court orders it obtains be filed under seal, meaning that they are not public and are therefore harder to challenge by innocent (and by guilty) third parties ([52]: 131). The asymmetry between the claimant’s privileged access to the courts and the defendants’ weaker position is furthered by the recourse to ex parte hearings, where legal actions can be approved by a judge without immediately notifying the other party [58]. This extraordinary procedure serves Microsoft (and those harmed or targeted by botnets) well but does not contribute to high levels of accountability. In fact, a technical paper by Nadji et al. [52] identified compliance issues for Microsoft, which appears to have started sinkholing the Kelihos botnet before the date specified in the court order. Although this situation may have originated from a lack of internal coordination between the firm’s technical and legal teams, it highlights the structural mismatch between highly resourced transnational corporations and national court systems that lack the capacity to supervise the results of technically complex decisions, whose ramifications are not always well or fully understood.

Although industry-led takedowns enabled by civil remedies demonstrate the untapped potential of private forms of techno-legal interventions to disrupt botnets [59], particularly when contrasted with the modest outcomes of traditional enforcement approaches, they are controversial, not only because their effectiveness seems short-lived but also due to their unilateral nature in a complex and tightly linked technical ecosystem where miscalculations can have rapidly cascading effects. By focusing on bot-herders (who often remain anonymous and are rarely arrested) and a handful of facilitating companies (registrars and hosting services), Microsoft perpetuates the deterrence and incapacitation philosophy of the criminal justice approach. In doing so, it fails to consider the more diffuse responsibility of a broad range of industry players—itself included—in the botnet epidemic and the potential of harm reduction strategies that could be implemented under a polycentric regulatory model to address it. The following section examines five anti-botnet initiatives that follow such principles.

The polycentric regulation of botnets: five national initiatives

Over the past ten years, five countries (Australia, South Korea, Japan, Germany, and the Netherlands) have adopted or explored a multilateral regulatory approach to fighting botnets. Additionally, Ireland outsourced the implementation of its anti-botnet platform to Germany in 2011, in an effort to minimise costs [60], while an initiative jointly announced by the US Government and the telecommunications industry in 2012 (a voluntary anti-bot code of conduct) fell short of offering a truly coordinated response. The countries that developed anti-botnet approaches have placed ISPs and anti-virus companies, not the police or a single multinational corporation, at the core of their harm reduction strategies. ISPs hold a special place in the digital ecosystem because they exercise a virtual monopoly over the technical infrastructure that lets data flow over the Internet. All communications between infected computers and botmasters are routed through their systems, which they routinely and thoroughly monitor to ensure performance. This central technical role makes them ‘fulcrum institutions’ that occupy influential positions in governance networks and whose actions directly and indirectly affect the entire ecosystem to which they belong [61]. To illustrate, van Eeten et al. [62] discovered that more than half of the world’s spam was sent from compromised computers connected to the Internet through 50 major ISPs, which means that a small number of companies are involuntarily facilitating the growth of the botnet economy. Changing the botnet prevention, detection, and mitigation methods used by ISPs could therefore have a rapid cascade effect and provide sustained improvements to user security.

ISPs are clearly well aware of the problems botnets pose to users. These organisations have three main security models for dealing with this problem: internal, external, and hybrid [63]. In the internal model, ISPs take action to identify botnets and disrupt their activities without involving users, for example by blocking certain communication channels. In the external model, ISPs provide their clients with expert advice and offer discounts on anti-virus products supplied by security partners. In the hybrid model, which combines elements of the other two, ISPs set policies that users must follow to help prevent malicious traffic. A major obstacle shared by these three models is the lack of direct financial incentives, which means that some ISPs make only the minimal effort required, since botnets do not negatively impact their profitability. The five initiatives described below overcome this incentive problem by embedding ISPs in a self-regulatory web of collaborations that promotes collective action.

In 2005, Australia and South Korea were the first countries to implement this type of initiative. Japan followed suit in 2006. For an in-depth analysis of the characteristics of these five programs, see Dupont [64]. Table 2 summarizes their most significant features.

Table 2 Features of the five anti-botnet initiatives

Although variations in these programs make them distinct, they are governed by similar guidelines, in which partnership plays a key role. These partnerships are often implemented by public Internet regulatory agencies attached to economic development and telecommunications ministries rather than to justice or public safety ministries, and they bring together private entities that usually compete with each other for market share. ISPs play a central role in the fight against botnets, but the companies that develop anti-virus software also contribute a great deal to detecting and cleaning infected computers.

Anti-botnet programs usually follow the same operational pattern. First, ISPs and telecommunication regulatory agencies develop an information-sharing system that allows them to aggregate botnet data. They use their privileged status to monitor Internet traffic, identify suspicious data flows, and create a regularly updated list of infected machines. This list makes it possible to inform each participating ISP of the IP addresses (Internet Protocol, the unique identification number of each device connecting to the Internet) of those customers whose machines display suspicious activity. At this stage, ISPs contact their clients by email, traditional mail, or telephone to inform them that their computer is probably infected. Since profit margins in this industry are relatively low, the costs associated with notifying clients and supporting their cleaning efforts could deter ISPs from acting virtuously. To increase the incentive to participate in these initiatives, anti-botnet partnerships are often awarded public monies that fund the development and maintenance of support tools. These tools include websites, which provide plain-language explanations of what botnets are as well as step-by-step guidelines on how to remove malware from a computer, and hotlines, which help inexperienced users disinfect their computers, relieving the workload of ISPs’ technical support teams. Countries that have established partnerships with anti-virus companies also offer victims free downloadable applications that automate the disinfection process and prevent mistakes.
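The matching-and-notification step described above can be illustrated with the minimal sketch below, which joins a hypothetical feed of infected IP addresses against an ISP’s own customer records and drafts a notification for each affected subscriber. All field names, addresses, and message wording are invented for the example; a real implementation would, among other things, have to reconcile observation timestamps with dynamic IP address reassignments.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    account_id: str
    email: str

# Hypothetical feed shared by the anti-botnet partnership: IP addresses
# recently observed communicating with known C&C servers.
infected_ips = ["203.0.113.7", "203.0.113.42"]

# Hypothetical snapshot of the ISP's own IP-to-customer assignments.
ip_assignments = {
    "203.0.113.7": Customer("A-1001", "subscriber1@example.net"),
    "203.0.113.99": Customer("A-1002", "subscriber2@example.net"),
}

def draft_notifications(ips, assignments):
    """Return (email, message) pairs for subscribers whose IP appears in the feed."""
    notices = []
    for ip in ips:
        customer = assignments.get(ip)
        if customer is None:
            continue  # address not currently assigned to one of our subscribers
        notices.append((
            customer.email,
            f"Account {customer.account_id}: a device on your connection ({ip}) "
            "shows signs of a botnet infection. Please visit our support pages "
            "for free removal instructions.",
        ))
    return notices

for email, message in draft_notifications(infected_ips, ip_assignments):
    print(email, "->", message)
```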

Regularly updated lists of compromised machines also allow ISPs to identify users who are unable or unwilling to rectify the situation. In 2010, 29 % of users in Japan who received notification of a botnet infection made no effort to fix the problem ([64]: 18). Reminders are sent regularly to these noncompliant users, but more stringent measures can also be imposed. In South Korea, the Netherlands, and the United States, ISPs take a tougher approach with uncooperative users, for example by interrupting and restricting Internet access until machines have been disinfected. The digital quarantine imposed on infected machines is directly inspired by medical epidemiological approaches, making ISPs the guardians of a ‘healthy’ digital ecosystem. However, this stricter approach raises a number of legal and ethical issues. As Internet access begins to be seen as an extension of fundamental rights such as freedom of opinion and expression, giving private entities such as ISPs the power to restrict these rights could become a heavily contested issue [65].
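The escalation logic described in this paragraph can be summarised in the following sketch, in which the number of unanswered reminders determines whether an ISP sends another notice or places the connection in a restrictive ‘walled garden’. The threshold and the action names are illustrative assumptions rather than rules drawn from any of the national programs discussed here.

```python
REMINDER_LIMIT = 3  # assumed number of ignored notices before escalation

def next_action(reminders_sent: int, still_infected: bool) -> str:
    """Decide how an ISP responds to a machine flagged as infected."""
    if not still_infected:
        return "close_case"          # the customer cleaned the machine
    if reminders_sent < REMINDER_LIMIT:
        return "send_reminder"       # keep nudging the customer
    # Persistently unresponsive: restrict the connection to a remediation
    # portal (the 'digital quarantine') until disinfection is confirmed.
    return "apply_walled_garden"

print(next_action(reminders_sent=1, still_infected=True))   # send_reminder
print(next_action(reminders_sent=3, still_infected=True))   # apply_walled_garden
print(next_action(reminders_sent=3, still_infected=False))  # close_case
```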

Data about the effectiveness of anti-botnet initiatives remains fragmented and subject to interpretation, with only one study having attempted to measure their impact on the security performance of ISPs across 60 countries [66]. The statistical analysis conducted by these researchers showed a clear correlation between the existence of a national anti-botnet strategy and lower botnet infection rates, but it also highlighted strong variations between ISPs participating in the same anti-botnet initiative, as well as the larger impact on overall infection rates of other factors, such as the rate of unlicensed software use ([66]: 22). At a more granular level, the results obtained in South Korea, Japan, and Germany suggest a significant decline in the proportion of infected computers after such partnerships were established. The botnet infection rate for South Korean computers dipped from 26 % to 0.5 % between 2005 and 2011 [67, 68], while Japan saw a drop from 2.5 % to 0.6 % during that same period [69]. In Germany, where different metrics were used, the amount of spam sent by botnets shrank by 75 % between September 2010 and May 2011 [70]. As for other countries, the lack of data can be attributed partially to the voluntary nature of the partnerships: since ISPs have no legal obligation to participate in these schemes and retain a great deal of autonomy over the support they provide their infected clients, it is difficult to analyse the overall results of an initiative without differentiating each partner’s respective contribution, which amounts to ranking ISPs according to their anti-botnet performance. ISPs may prefer not to get involved in initiatives that provide another layer of comparison in an already highly competitive market. However, under the right conditions, more coercive disclosure practices (which name and shame ineffective or negligent actors) based on public statistics may give regulatory authorities more power to alter the behaviour of actors and thereby increase compliance. This approach has often been used successfully in the health care, education, insurance, and environmental protection sectors and could probably be leveraged to improve online security [71, 72].

Voluntary public-private partnerships seem to provide better and more durable outcomes than highly publicized but occasional arrests and takedowns. This harm reduction strategy, however promising, nevertheless faces four major challenges. The first is technical: the advent of the Internet of Things (IoT), which will connect billions of electronic devices through a communications protocol (IPv6) able to accommodate approximately 340 undecillion (sextillion in the UK) IP addresses, will significantly complicate the monitoring of infected machines. Furthermore, since the interfaces of many connected devices are minimal and far less open to users’ input than computer operating systems, it will become more complicated to remove malware hosted on them [73]. The second challenge relates to the competitive adaptation of botmasters [74]: offenders may increasingly mimic the notification methods used by anti-botnet programs to lure their victims into downloading what is actually malware. Scammers already use the fear of fictitious infections to entice their victims into paying a fee to download useless anti-virus applications that often damage the users’ machines [75]. Because botnets are inherently ‘manufactured risks’ [76], they can adapt to changing conditions and exploit the trust that is so essential to anti-botnet initiatives. Users will then find it more difficult to distinguish legitimate notifications from fraudulent ones and may prefer to ignore all such notifications. The third obstacle is legal in nature: after Edward Snowden disclosed the extensive surveillance web to which Internet users are exposed – the NSA being the most advanced incarnation of a larger international trend [77] – convincing them of the harmlessness, let alone the benefits, of a system that constantly monitors their digital data flows and shares that information with government agencies and companies has become increasingly difficult. Germany, for example, has compromised by implementing a more burdensome and less effective procedure that protects the privacy of the owners of infected computers, in order to retain the trust of its citizens and reduce their fears of being spied on [78]. The fourth and final challenge is jurisdictional: for the time being, these regulatory mechanisms are being defined and implemented at the national level, while the problem is fundamentally transnational. Some countries, such as Japan, South Korea, and Germany, try to encourage neighbouring nations to adopt their polycentric anti-botnet harm reduction strategies, but these initiatives are still in their early stages and have not yet caught on internationally, unlike the transnational regulatory methods that are now the norm for air transportation, nuclear energy, and banking activities [79].

Conclusion

This overview of three anti-botnet strategies (criminal enforcement, private disruption, and polycentric harm reduction) and the fragmentary evidence available about their respective effectiveness suggests that unilateral approaches—whether they originate from police organisations or transnational corporations—are of limited use against global risks such as botnets, which threaten the integrity of the digital ecosystem. Although evaluations of the advantages of regulation over public or private incapacitation have not yet been carried out, the anecdotal evidence presented in this article suggests that polycentric approaches inspired by regulatory pluralism should be more frequently considered as a sustainable way to reduce botnet harm and increase Internet resilience. As in many other areas related to cybercrime, reliable statistics and evidence are in short supply, and a comparative economic analysis of the respective costs and benefits associated with the three strategies described in this article would significantly enhance our understanding of their impact on the digital and regulatory ecosystems. The three approaches discussed here are, of course, not mutually exclusive and would probably benefit from tighter integration. In the five anti-botnet initiatives examined in the last section, to my knowledge not a single police investigative unit was able to analyse the vast amounts of data gathered by ISPs in order to provide information that could help guide police operations. Of the three strategies, the takedowns used by Microsoft seemed to indicate the greatest awareness of a need for diversification of approaches. As its experience increased, Microsoft began to enter into more partnerships with police agencies in the US and in Europe, combining private disruption with criminal enforcement, and, once botnets had been dismantled, also began to reach out more often to national CERTs (Computer Emergency Response Teams) and large ISPs to share malware signatures and removal tools in order to consolidate the benefits of its interventions [59].

One important issue raised by the growing number of hybrid operations in which police officers are supported by private companies is the extent to which private interests influence these highly publicized botnet investigations. These operations sometimes rely on a division of labour, with the initial leads and supporting intelligence collected by the private sector and then handed over to the police, who arrest and prosecute botmasters. Similar arrangements, where the police become the instrument of private interests, have frequently been found in other fields in which the police lack technical expertise [80–82]. There are, however, alternative models of public-private cooperation that better serve the public good [83]. One is third party policing, described by Mazerolle and Ransley [84]. Third party policing relies on the coordination of multiple actors who have specific regulatory capacities that help prevent certain types of crime. These configurations make use of the police’s persuasive and coercive powers to spur security networks into action. The police no longer act as a monopoly in the delivery of security but work to build collaborative security networks that involve the most suitable organisations and are coordinated by public institutions. Although this approach seems particularly well suited to dealing with online crime, there is very little research on how the police are adapting to this new polycentric reality [85–87]. Analyses of such partnerships, their incentive structures, and the constraints under which they operate, such as the one conducted in the UK by Levi and Williams [88], would provide vital information about the capacity of the police to adapt to the changing criminal landscape introduced by the Digital Revolution.

Modern police institutions, designed initially to maintain order in the new urban environment forged by the Industrial Revolution [89], seem to have difficulty moving beyond this model and finding their place in a transnational polycentric world where security is produced by networks of public, private, and hybrid actors who use a variety of resources to guarantee the integrity of data flows. This slow adaptation is further complicated by the financial reality that defence and intelligence agencies, rather than police services, are the main recipients of public funding to establish and implement cybersecurity policies [77, 90].

Because of their versatility, scalability, and resilience, botnets are one of the most significant threats to the digital ecosystem. They represent a shift toward automated offending, exploiting the same opportunities that the Networked Information Economy provided for Silicon Valley startups [91]. Three different approaches to controlling them and reducing the harm they cause individuals, organisations, and machines were reviewed here. While the traditional local law enforcement model seems inadequate to deal with such a global problem, innovative polycentric strategies led or implemented by private actors, alone or in collaboration with public regulatory agencies, have resulted in more encouraging outcomes. However, systematic evaluation of the effectiveness and acceptability of these strategies is still unavailable, and there is a limited supply of scientific evidence to help determine the most sustainable approach. This lack stems in part from the fact that the data needed to conduct such evaluations are currently held by private actors (ISPs, software companies, search engines, financial institutions), which are reluctant to make them available to researchers who might tell different stories from the ones favoured by their public relations departments. Moreover, disciplinary boundaries often prevent criminologists from working regularly with computer scientists and regulation experts, although pooling their methodological and theoretical capacities would obviously be useful in understanding these transformations and their effect on the governance of online harm. The creativity and ingenuity that spawned the Digital Revolution have been embraced by online offenders but are just beginning to percolate into the regulatory and academic spheres.