1 Introduction

Traditional economic arguments suggest that there is a market failure in the production of new ideas. Innovation is expensive, imitation is cheap, and exclusion is difficult. As a result, imitators will copy new ideas before innovators have a chance to recoup their sunk costs of development. Foreseeing losses, innovators will be deterred from investment, and innovation will be undersupplied. To protect the incentive to innovate, traditional economic analysis recommends that innovators be granted the privilege of excluding competitors from the market. The potential quasi-rent generated by a right to exclude incentivizes production, if not optimally, then at least better than a pure market would.

Neither Virginia public choice nor the institutional analysis of the Bloomington school challenges the traditional argument for intellectual property (IP), but each offers important emendations and qualifications. By subjecting politics to the same analysis as markets, the Virginia school reminds us that political “solutions” have their own failings, and some can be worse than market failure. The Virginia school parallels the traditional arguments of market failure with a theory of government failure, providing a more balanced approach to public policy.

The traditional arguments for IP also treat the commons as a kind of wasteland. Following the traditional logic, the commons inevitably leads to tragedy. In contrast, the Bloomington school demonstrates how commons often create opportunities for mutually beneficial rules under which common pool resources may thrive. The approach to the commons pioneered by the Bloomington school is especially appropriate to understanding the intellectual commons because the intellectual commons is more robust to the overuse and under-maintenance problems that drive the tragedy of other commons.

In what follows, we use the Virginia school to explain why intellectual property law has expanded in recent decades, at the same time drawing on the Bloomington school to shed light on the consequences of that expansion. Both schools give us insights into possible remedies.

2 The Virginia school and the unromantic history of intellectual property

Buchanan (1979a) defined public choice as “politics without romance.” Whereas much of the public finance literature explores optimal public policy as a benevolent and often omniscient social planner would implement it, the Virginia school assumes that politicians, bureaucrats, judges, and voters are no more virtuous or omniscient than anyone else. Consequently, the analysis of policy requires careful study of the incentives and constraints political actors face. When these incentives are taken into account, the policy recommendations of the naïve public finance literature are often found to be outside of our opportunity set. We are limited to the set of policies that are incentive-compatible for all actors.

Alongside Virginia public choice, Buchanan and Tullock (1962) also offer “constitutional economics,” a theory and practice of constitutional design. Buchanan (1979b) and Vanberg and Buchanan (1989) in particular offer hope that, at the “constitutional moment,” an opportunity exists to write rules of the game that, although designed to operate in a post-constitutional world of politics without romance, are produced behind a “veil of uncertainty” by the better angels of our nature. The tension between the romance of the constitutional moment and the politics without romance assumed to rule thereafter is notable. Nevertheless, we do see evidence for such a distinction in the creation and evolution of intellectual property law in the United States, a subject to which we now turn.

2.1 From constitutional moment to politics without romance

Traditionally, the London Stationers’ Company, a guild of printers and booksellers, held a monopoly on printing in England. A book could be printed only by a member of the guild, and once a book was registered by a member, the right to print that book was perpetual and exclusive. In 1710, the Statute of Anne attempted to lift the monopoly by vesting authors, not the guild of printers and booksellers, with original copyright and limiting copyright to 14 years (with the possibility of a one-time renewal for an additional 14 years). The printers and booksellers, however, maintained that the Statute of Anne did not limit, but instead only supplemented, their natural rights in common law to perpetual copyright. In a number of cases, the courts agreed with the guild. However, the House of Lords, acting as the Supreme Court of Great Britain, decisively rejected perpetual copyright in the landmark 1774 case of Donaldson v. Beckett.

The English debates were well known in the Americas, but aversion to monopoly was even stronger there than it had been in England, and rather than a debate between perpetual and limited terms, the debate in the United States was mostly between limited terms and no protection at all. The Articles of Confederation had no provision for copyright, and those states with copyright laws provided for limited terms, with most adopting the same 14/14 terms of the Statute of Anne (Ochoa and Rose 2002).

The draft Constitution authorized Congress to create patents and copyright, but only for “limited times,” only to “authors and inventors,” and only “to promote the progress of science and the useful arts.” In responding to the draft, Jefferson wrote to Madison arguing that this wording was too expansive. Making the case for a Bill of Rights, Jefferson argued for erring in favor of bright-line rules:

It is better to establish trials by jury, the right of Habeas corpus, freedom of the press and freedom of religion in all cases, and to abolish standing armies in time of peace, and Monopolies, in all cases, than not to do it in any. The few cases wherein these things may do evil, cannot be weighed against the multitude wherein the want of them will do evil.… The saying there shall be no monopolies lessens the incitements to ingenuity, which is spurred on by the hope of a monopoly for a limited time, as of 14. years; but the benefit even of limited monopolies is too doubtful to be opposed to that of their general suppression.

Jefferson to Madison, 31 July 1788 [Footnote 1]

Madison responded:

With regard to monopolies they are justly classed among the greatest nuisances in Government. But is it clear that as encouragements to literary works and ingenious discoveries, they are not too valuable to be wholly renounced? Would it not suffice to reserve in all cases a right to the Public to abolish the privilege at a price to be specified in the grant of it? Is there not also infinitely less danger of this abuse in our Governments, than in most others? Monopolies are sacrifices of the many to the few. Where the power is in the few it is natural for them to sacrifice the many to their own partialities and corruptions. Where the power, as with us, is in the many not in the few, the danger can not be very great that the few will be thus favored. It is much more to be dreaded that the few will be unnecessarily sacrificed to the many.

Madison to Jefferson, 17 Oct. 1788 [Footnote 2]

Madison’s letter is of interest for two reasons. First, the brief reference to patent buyouts was never discussed further, but similar ideas have been raised more recently (e.g., Kremer 1998). [Footnote 3] Second, Madison argues that with the US Constitution, the power is invested in the many, not in the few. In contrast, the logic of collective action ensconced in the Virginia school formula for political success—concentrate benefits, disperse costs—suggests that even in a democracy significant power may rest with the few (Olson 1965). [Footnote 4]

Madison’s view, of course, held sway, and Article I, Section 8, Clause 8 of the US Constitution empowers Congress

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

Nevertheless, we argue that at the constitutional moment, the framers carefully considered the rules of the game and adopted those they foresaw as best promoting the public interest. As early as the Copyright Act of 1790, however, there are signs of reversion to a politics without romance.

2.2 Politics without romance

Almost immediately after the first session of Congress, writers began to petition Congress for protection of their works. [Footnote 5] The Copyright Act of 1790 was meant to fill in the administrative details of how copyright law would work. Importantly, the first draft of the new law appears to have been written not by a member of Congress, but by Noah Webster (Patry 1994)! Webster, cousin to Senator Daniel Webster, was the author of numerous textbooks and, of course, the famous dictionary that still bears his name. His draft of the copyright act, which was not adopted in full, would have extended copyright not just to authors, but also to booksellers and printers. As it was, the 1790 law covered not only books but also maps and charts (a rather broad reading of the Constitution’s “writings”). Webster also was instrumental in getting the 1831 act passed, which doubled the initial term of protection from 14 to 28 years. Writing to Eliza W. Jones, Webster noted,

[My] business in part was to use my influence to procure an extension of the law for securing copy-rights to authors.… By this bill the term of copy-right is secured for 28 years, with the right of renewal… for 14 years more. If this should become law, I shall be much benefited.

Webster to Eliza W. Jones, January 10, 1831 [Footnote 6]

Copyright law gradually has increased in scope, beginning with the inclusion of maps and charts, then adding prints (1802), musical compositions (1831), plays (1856), photographs (1865), paintings, drawings, and statues (1870), motion pictures (1912), sound recordings (1971), computer programs (1980), and architectural works (1990), among other expansions (Bell 2014). Judges as well as legislators expanded copyright. For example, copyright initially applied only to copying. In 1853, when Harriet Beecher Stowe sued over an unauthorized German translation of Uncle Tom’s Cabin, the court ruled that she had no right to prevent translations. [Footnote 7] By the end of the nineteenth century, however, through both common law and legislation, copyright had expanded beyond copying to encompass derivative works as well.

Nevertheless, until the mid-1970s, copyright law overall remained relatively modest. The term of protection was still only 28 years, renewable for another 28 years. Formal registration still was required and notable limits remained in place. Sound recordings, for example, received protection only in 1971, and even then, there was no exclusive public performance right in the recording—anybody could play a recording publicly without paying royalties to the performers or record label. By 1976, however, the romance was gone.

2.3 The romance is gone

In 1976, Congress passed a new Copyright Act that radically altered the paradigm for copyright policy in the United States. Copyright terms were extended to the life of the author plus 50 years. Protection for works with corporate authorship was extended to 75 years. In addition, formalities were weakened (and then effectively abolished with the Berne Implementation Act of 1988, effective March 1, 1989). Protection was granted automatically to “original works of authorship fixed in any tangible medium of expression,” which includes even scribbles on a napkin. In 1998, copyright terms were extended yet again, to life of the author plus 70 years, or for corporate authorship to 95 years from publication or 120 years from creation, whichever is shorter.

No change provided better evidence of politics without romance, however, than the extension of copyright terms retroactively, to already extant works. A temporary monopoly may be justified by stronger incentives for future creation, but no incentive can increase the number of works created in the past. Retroactive copyright extensions merely transfer rents from consumers to content owners with no benefit to the public. As a result of the 1976 act and the 1998 extension, no new works will enter the public domain until 2019.

The decision to grant retroactive extensions for copyright terms has no basis in the public interest, but it is perfectly understandable through the logic of collective action, the public’s rational ignorance, and the industry’s rational information advantage. Litman (1987), the chronicler of the legislative history of the 1976 act, recounts how copyright producers essentially wrote the act with virtually no input from copyright consumers:

Most of the statutory language was not drafted by members of Congress or their staffs at all. Instead, the language evolved through a process of negotiation among authors, publishers, and other parties with economic interests in the property rights the statute defines. (pp. 860–61)

… Members of Congress revised the copyright law by encouraging negotiations between interests affected by copyright, by trusting those negotiations to produce substantive compromises, and by ultimately enacting those compromises into law.

This process yielded a statute far more favorable to copyright proprietors than its predecessor, containing structural barriers to impede future generations’ exploitation of copyrighted works. (p. 903)

Although copyright terms are the most visible indicator of the expansion of copyright law, in the past 15 years, copyright violations progressively have become criminal offenses. The Digital Millennium Copyright Act, enacted in 1998, established civil and criminal penalties for circumventing technical protection measures that could be used to violate copyright, whether or not there is an actual copyright infringement. The PRO-IP Act of 2008 raised both civil and criminal penalties for copyright, as well as patent and trademark, infringement. Lee (2012) documents the extent to which civil asset forfeiture has been used since it was legalized for copyright violations under the PRO-IP Act and argues that it violates the rule of law. “The federal government has begun seizing the domain names, servers, and other assets of online intermediaries,” he writes. “These seizures typically occur before the owners are convicted of any crime, and in some cases property is seized without its owners ever being charged.” (p. 56)

On almost every dimension—terms, subject matter, formalities, penalties—we have observed a substantial increase in copyright protection over the past 40 years. [Footnote 8] Why? There is little evidence that this expansion of copyright is in the public interest. [Footnote 9] Greater copyright protection creates a rent for owners of preexisting blockbuster content, but why has rent-seeking grown so much in recent decades? One possibility can be found by returning to the Virginia school maxim—concentrate benefits, disperse costs.

The costs of copyright protection always have been dispersed among the many consumers of content, but for a long period, the benefits were only relatively, not absolutely, concentrated. With the exception of the authors and publishers of textbooks and dictionaries—who, we have already seen, were noteworthy lobbyists for copyright extension—most authors and publishers were small players with little or nothing to gain from rent-seeking. Sales of most books—again, excluding dictionaries and textbooks—do not continue past 14 years, and even for books with long-lasting sales, the costs of lobbying typically would exceed the benefits to any single author, especially given that lobbying carries no guarantee of success. [Footnote 10]

In the twentieth century, however, we saw the rise of large firms that owned millions and sometimes billions of dollars’ worth of intellectual property. Disney, the world’s largest media conglomerate, is not the only such corporation, but it is a good illustration of how benefits became concentrated over time and motivated more vigorous rent-seeking. [Footnote 11] Twice in the twentieth century, Mickey Mouse came under threat of entering the public domain, and both times he escaped by a whisker. Under the 1909 law, Disney would have lost the Mickey monopoly in 1988, and Goofy and Donald Duck would have entered the public domain shortly thereafter. Mickey mobilized more effective lobbying efforts than did the heirs of the Brothers Grimm, however, so unlike Cinderella, Rumpelstiltskin, and Snow White, Mickey, Goofy, and Donald were saved for Disney by the 1976 law (effective January 1, 1978), which extended Disney’s rights to Mickey until 2003. Before the end of the century, however, Disney lobbyists went into full-court-press mode for the Sonny Bono Copyright Term Extension Act (October 27, 1998), which extended Disney’s rights to 2023. No one would be surprised by a further extension before that deadline. It is noteworthy that almost the entire debate over copyright extension was over rights to existing works and not incentives for future creation, which can be increased only marginally by an extension of rights far into the future.

It is notable that when the Copyright Term Extension Act of 1998 was litigated in Eldred v. Ashcroft (2003), a number of prominent economists, including Nobel Prize winners George Akerlof, Kenneth Arrow, Ronald Coase, Milton Friedman, and James Buchanan, signed an amicus curiae brief opposing the extension.

2.4 Virginia school analysis of patents

The scope of patent protection likewise has expanded in recent decades, although the patent statutes themselves have not changed much. Rather, it is the application of patent law that has changed dramatically, as the patent office and courts have loosened restrictions on patentable subject matter and other criteria for awarding and upholding patents. The change in the application of the law is most visible in software patents.

Through the 1960s, the US Patent and Trademark Office (USPTO) refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly US Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the US Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms were not patentable subject matter and that, since software is by definition just a complex string of ones and zeros, computer software could not be patented. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming its prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.

Although the Supreme Court allowed only a narrow scope for software patents, the lower courts eroded its ruling gradually. In 1982, at the urging of the patent bar (Landes and Posner 2004, p. 27), Congress consolidated appellate review of patent cases in a newly created Court of Appeals for the Federal Circuit (henceforth Federal Circuit), which was constructed out of the Court of Customs and Patent Appeals. The new Federal Circuit Court had jurisdiction not only over patent appeals arising out of the USPTO as its predecessor did, but also over the appeals arising out of federal district courts, giving it a near-monopoly over patent appeals.

Although the Federal Circuit Court has other areas of jurisdiction, patent cases comprise a large fraction of its docket. A number of scholars argue that this partial specialization and division of judicial labor can affect decision-making. As early as 1951, Rifkind (1951, pp. 425–26) warned that proposals to create a specialized patent court would lead to “decadence and decay”:

The patent Bar is already specialized. At present, however, patent lawyers practice before nonspecialized judges and accommodate themselves to the necessity of conveying the purposes of their calling to laymen. Once you complete the circle of specialization by having a specialized court as well as a specialized Bar, then you have set aside a body of wisdom that is the exclusive possession of a very small group of men who take their purposes for granted.

Landry (1993, pp. 1206–7) argues of the Federal Circuit:

Specialization widens the gap between the public and the decisionmaker. Authoritatively as well as geographically, the public loses sight as bureaucratization removes to expert control. The interested public is redefined to include only those who are part of the same specialized subculture as the decisionmaker. The rest of the public is marginalized—dismissible like the “quacks” who show up at rate-making proceedings. And the interests of the subculture that does [sic] get taken seriously do not necessarily coincide with the interests of the general public. In the patent context, patent attorneys, patent agents, corporations, and scientists all have reasons to favor a strong patent system (although it can be a double-edged sword). Yet the patent system was instituted by the people and for the people, not by the people and for the lawyers, corporations, and scientists. In short, specialization yields the old problem of capture: the fox guarding the chicken coop.

Specialization in this view can lead to a kind of regulatory capture, but one based more on ideology and self-selection than on naked interest (compare Lopez 2010 with Klein 1994). Nevertheless, Bruff (1991, pp. 331–32) notes that specialization also can affect selection by concentrating the benefits, to special interests, of winning the judicial selection process.

To some degree, then, it makes sense to regard the Court of Appeals for the Federal Circuit as partially captured by patent interests, and to extend the notion of regulatory capture to judicial capture. Support for this view comes from the legislative history of the act. Although the American Bar Association (ABA) as a whole opposed the creation of the Federal Circuit, the ABA’s Patent, Trademark, and Copyright Section strongly supported the bill, as did the Intellectual Property Law Association and the Intellectual Property Owners Association (Beighley 2011). The support of the Intellectual Property Owners Association is especially telling. Although the academic discussion of the bill often was framed in terms of the need to create certainty and uniformity, the support for the bill came from those who wanted patent law to be certain, uniform, and strong. Indeed, the idea for creating the court was developed initially by President Carter’s Domestic Policy Review on Industrial Innovation. The patent committee of that commission, led by corporate patent counsel Robert Benson, who was also chair of the Patent, Trademark, and Copyright Section of the ABA, promoted the idea as leading to greater innovation through patent strength. The Carter administration failed to pass the bill, but it was supported strongly by the Secretary of Commerce under President Ronald Reagan, and Reagan signed it as a pro-business measure (Beighley 2011). In an address to the law clerks of the Federal Circuit, Judge Pauline Newman, the first appointee to the Federal Circuit, was clear about the purpose of the court:

The court was formed for one need, to recover the value of the patent system as an incentive to industry. The combination of the Court of Claims and the Court of Customs and Patent Appeals was not desired of itself, it was done for this larger purpose. This was our mission—our only mission.

Quoted in Beighley (2011, p. 702).

When the Federal Circuit is viewed through this lens, the effect it has had on patent policy becomes less surprising (see Fig. 1).

Fig. 1 Patents issued by year. Source: US Patent and Trademark Office

The creation of the Federal Circuit Court is certainly correlated with a noteworthy increase in the number of patents issued. In 1982, the year Congress created the court, the US Patent and Trademark Office issued 63,005 patents. In 2012, the USPTO issued 275,966 of them, over four times as many. Some evidence suggests that this sea change may have been caused by the patent-friendly jurisprudence of the Federal Circuit. Using a dataset of district and appellate patent decisions for the years 1953–2002, Henry and Turner (2006) find that the Federal Circuit has been significantly more permissive with respect to affirming the validity of patents. They estimate that patentees are three times more likely to win on appeal after a district court ruling of invalidity in the post-1982 era. In addition, following the precedents set by the Federal Circuit, district courts have been 50 % less likely to find a patent invalid in the first place, and patentees have become 25 % more likely to appeal a decision of invalidity. Henry and Turner (2013) find a structural break in validity rulings in 1983. Naturally, if the probability that the courts will validate a given patent increases, we would expect to observe more patenting (see also Hall 2005).

Although still constitutionally subordinate to the Supreme Court, the Federal Circuit Court has eroded the limits on patent jurisprudence imposed upon it by the higher court. [Footnote 12] In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” Those broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones (Miller and Tabarrok 2014). Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one. [Footnote 13]
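The magnitude of the shift in patenting can be checked with a few lines of arithmetic. This is a minimal sketch using only the 1982 and 2012 grant counts cited above; the implied compound growth rate is our own calculation, not a figure from the text.

```python
# Growth in annual patent grants after the creation of the Federal
# Circuit, using the figures cited in the text (USPTO grants:
# 63,005 in 1982; 275,966 in 2012).
grants_1982 = 63_005
grants_2012 = 275_966

ratio = grants_2012 / grants_1982      # growth factor over 30 years
annual_growth = ratio ** (1 / 30) - 1  # implied compound annual rate

print(f"growth factor: {ratio:.2f}")                  # 4.38
print(f"implied annual growth: {annual_growth:.1%}")  # 5.0%
```

The ratio confirms the text's "over four times as many"; sustained over three decades, it amounts to roughly 5 % growth in grants per year.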

By capturing the Federal Circuit, the patent bar has been able to expand patentable subject matter, the number of patents issued annually, and the amount of patent litigation. In short, as Landes and Posner (2004) argue, “The most certain effect of the creation of the court has been to increase the demand for the services of patent lawyers…”. To understand better the consequences of the expansion in both copyright and patent protection, we turn now to an analysis informed by the Bloomington school of public choice.

3 The Bloomington school’s eightfold path and the intellectual commons

Centered on the Workshop in Political Theory and Policy Analysis at Indiana University, Bloomington scholars study common pool resources (CPRs) and the institutions that govern them. Although many economists consider rivalry an essential element of a commons, Bloomington scholars define a commons more expansively as any shared resource, rival or nonrival, subject to social dilemmas (Hess and Ostrom 2006b, p. 3). By this more expansive definition, the stock of shared knowledge, ideas, creative works, and other intellectual resources can be viewed as a commons.

Contrary to what one would expect from a naïve application of the prisoner’s dilemma, Bloomington scholars have discovered that in the presence of the proper rules, a commons can provide collective benefits that are not available or are available only at much higher cost using other forms of property management. Thus, for Elinor Ostrom and the Bloomington school, the commons provides an opportunity, not just a tragedy.

Ostrom (1990, p. 90) and Wilson et al. (2013, p. S22) summarize eight principles often found in successfully managed, long-lived commons. The Bloomington eightfold path is to: (1) clearly define boundaries, (2) create a proportional equivalence between benefits and costs, (3) provide for collective choice over the rules of the commons, (4) monitor free riders and underminers, (5) create graduated sanctions for transgressors, (6) provide conflict-resolution mechanisms that are seen as fair, (7) recognize the rights of users to organize themselves, and (8) for common pool resources that are part of larger systems, respect federalism and polycentric ordering by matching the scale of the provider with the scale of the commons or public good, following the subsidiarity principle.

Since the Bloomington school has shown that a commons of rival resources can thrive with the right rules, we should a fortiori be optimistic that it is possible to have a successful intellectual commons of nonrival resources. The intellectual commons is a nearly perfect Lockean commons from which one can take as much as one wants and still leave “enough and as good” for others.

Indeed, the intellectual commons is super-Lockean because when IP law is well designed, those who draw from the commons eventually (barring Disney-type extensions) also supply the commons with new material from which others may draw. Thus, when IP law is well designed, those drawing on the intellectual commons leave more and better for everyone else.

3.1 The underuse of the intellectual commons

The intellectual commons is a fountainhead that does not run dry, a source of ideas to build upon, revise, mix, combine, and develop. Thus, the free riders who need to be monitored (principle 4) are not those who use the commons but those who fail to expand the commons. Walt Disney drew heavily on the common stock of fables and fairy tales, but the Disney Corporation has enclosed its fables and fairy tales behind fearsome walls. In other words, since a commons does not necessarily lead to tragedy, it follows that the destruction of a commons, whether by privatization/enclosure or centralization, can reduce welfare (Ostrom 1990).

The extension of copyright law, for example, has diminished the intellectual commons. Had the copyright law that existed before 1976 continued in place, works published in 1956 would have entered the public domain on January 1, 2013, and been freely available to produce, publish, revise, and build upon. Among the books and movies that would have entered the public domain are:

Books

 Winston Churchill, A History of the English-Speaking Peoples, Vols. 1 & 2
 Philip K. Dick, Minority Report
 Ian Fleming, Diamonds Are Forever
 Fred Gibson, Old Yeller
 Billie Holiday, Lady Sings the Blues
 Alan Lerner, My Fair Lady
 Eugene O’Neill, Long Day’s Journey into Night
 Dodie Smith, 101 Dalmatians
 John Osborne, Look Back in Anger

Movies

 Around the World in 80 Days
 The Best Things in Life Are Free
 Forbidden Planet
 Godzilla, King of the Monsters!
 It Conquered the World
 The King and I
 The Man Who Knew Too Much (1956 remake)
 Moby Dick
 The Searchers (1956 film version)

Source: Center for the Study of the Public Domain

Under current copyright law, none of these works will enter the public domain until 2052. In fact, because the Copyright Term Extension Act of 1998 retroactively extended copyright terms by 20 years, no published works will enter the public domain until 2019.
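The public-domain dates above follow mechanically from the successive term rules. A minimal sketch of the arithmetic, assuming a work published in 1956 and comparing the 28-plus-28-year term of the pre-1976 law with the 95-years-from-publication rule for corporate works under current law (the function names are ours):

```python
# Public-domain entry years for a work published in 1956 under the two
# regimes discussed in the text. A work enters the public domain on
# January 1 after its term expires, hence the trailing "+ 1".

def pd_entry_pre_1976(pub_year: int) -> int:
    """1909 Act: 28-year initial term plus a 28-year renewal."""
    return pub_year + 28 + 28 + 1

def pd_entry_current_corporate(pub_year: int) -> int:
    """Post-1998 rule for corporate works: 95 years from publication."""
    return pub_year + 95 + 1

print(pd_entry_pre_1976(1956))           # 2013, as in the text
print(pd_entry_current_corporate(1956))  # 2052, as in the text
```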

It has been argued that, in addition to stimulating new work, copyright can also encourage the efficient exploitation of existing work. In justifying the retroactive extension of copyright terms in 1998, for example, Congress argued that extensions “would provide copyright owners generally with the incentive to restore older works and further disseminate them to the public.” The argument may apply to the occasional work that requires extensive sunk costs to restore or disseminate, but the empirical evidence rejects the argument for most works.

Heald (2007, 2013), for example, looks at a random sample of books available from Amazon.com by decade of publication. A large number of books are available from recent decades, but a large number of books are also available from before 1920, i.e., the period when works begin to enter the public domain. In between, as Fig. 2 shows, is a valley of deadweight loss, a dearth of books that could generate consumer surplus but are not available due to copyright restrictions that render their production unprofitable.

Fig. 2 Estimated Amazon titles by percent, by decade of publication. Note: Book titles based on a random sample; includes fiction and non-fiction books. Source: Based on Heald (2013) with permission; notation added.

Patents can also diminish the intellectual commons, not simply in the sense that a patented product or process is not freely available, but because patents can diminish innovation. When ideas build upon ideas—cumulative innovation—patents raise the cost of innovation (Tabarrok 2011). When multiple patents are inputs into some new product or research program, a multiple marginalization problem arises, and the transaction costs associated with bargaining to an efficient solution can be prohibitive (Heller 1998; Buchanan and Yoon 2000). If a new product uses 10 patents and the owner of each patent wants a third of the new product’s revenue, then the owners’ combined demands exceed the product’s entire revenue and the new product will not come to market. Cutting-edge products like smartphones build on many thousands of licensed patents.Footnote 14 Although Apple and Samsung have been able to navigate these licensing waters well enough to bring profitable products to market, they are constantly engaged in expensive, rent-seeking patent litigation. The patent litigation thicket reduces innovation and creates economies of scale in the legal department, making smaller firms less competitive.
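The royalty-stacking arithmetic can be made concrete with a toy calculation (the numbers are the paper's hypothetical example, not data from any actual licensing negotiation):

```python
# Toy illustration of the multiple-marginalization ("royalty stacking") problem.
# Each patent holder independently demands a fraction of the product's revenue.
def total_royalty_share(n_patents, share_demanded_each):
    """Fraction of revenue claimed in total by all patent holders."""
    return n_patents * share_demanded_each

# The example from the text: 10 patents, each owner demanding a third of revenue.
claimed = total_royalty_share(10, 1 / 3)
print(f"Total share of revenue demanded: {claimed:.2f}")  # 3.33

# The product is only viable if the claims sum to less than 1 (100% of revenue).
viable = claimed < 1
print("Product comes to market:", viable)  # False
```

Because each holder bargains independently rather than jointly, the demands can sum to well over 100 percent of revenue, which is precisely the multiple marginalization problem.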

Of course, the incentive effects of patents (and copyright) in generating new ideas are well known, so there are inevitable tradeoffs. Figure 3 illustrates a hypothetical relationship between innovation and patent strength that is analogous to a Laffer curve, showing that after some point, greater patent strength reduces innovation. We don’t know for certain at what point the curve bends, and it likely bends at different points for different types of intellectual property, but as patents become stronger, the incentives for new innovation decline while the disincentives for using and building on old innovations increase, so the curve must bend eventually. A simple point, yet the idea that greater patent strength can reduce innovation is often overlooked when the tradeoff is framed as being solely between innovation and consumption.

Fig. 3 The innovation to patent strength curve: stronger patents can reduce innovation
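The logic behind the curve can be sketched with a deliberately simple toy model (our own illustrative functional form, not one taken from the literature): let patent strength s run from 0 to 1, let the incentive to create rise with s, and let the freedom to build on existing work fall with s, so that net innovation is I(s) = s(1 - s).

```python
# Toy inverted-U ("Laffer curve") relationship between patent strength and
# innovation. The functional form I(s) = s * (1 - s) is purely illustrative:
# the first factor proxies the incentive to create new work, the second the
# freedom to use and build on old work.
def innovation(s):
    return s * (1 - s)

# Scan patent strengths on a grid and locate the peak of the curve.
grid = [i / 100 for i in range(101)]
peak = max(grid, key=innovation)

print("Innovation-maximizing patent strength:", peak)             # 0.5 here
print("Innovation at maximal patent strength:", innovation(1.0))  # 0.0
```

In this toy model the peak falls at an interior point and innovation collapses at maximal patent strength; the real-world curve presumably peaks elsewhere and differs across industries, but the qualitative shape is the same.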

The greater the role of cumulative innovation, the more likely it is that the tradeoff will turn negative. Murray et al. (2009) exploit a natural experiment in research involving patented mice: an unanticipated shock in which some varieties of engineered mice, but not others, suddenly became available on an open-access basis. They find that greater openness encouraged the exploration of more diverse research paths, including those followed by entrants new to the mouse research arena. Importantly, they find no decline in the creation of new varieties of mice. This suggests that, at current margins, intellectual property law could be shrinking the commons rather than expanding it.

More generally, although the number of patents exploded after the creation of the Federal Circuit, no increase in economic growth is discernible. If anything, the period since 1973 has been one of a Great Stagnation (Cowen 2011). The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Boldrin and Levine (2013, p. 3) put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”

In addition to the problem of cumulative innovation, the direct and indirect legal costs associated with patenting are substantial. Copyright infringement requires that a work has been copied, but patent infringement does not require that an idea has been copied. As a result, firms often invent technologies independently and then find themselves sued for patent infringement. Some so-called nonpracticing entities (NPEs), or “patent trolls,” search out obscure patents that might be interpreted to cover broad new technologies. NPEs acquire such patents and then bring lawsuits calling for injunctions against potentially infringing firms just as, for example, the defendant firms are bringing a product to market. Having invested in launch costs, a defendant firm may find it in its interest to settle. Since even spurious cases can cost $100,000 or more to defend, patent trolls can extort an amount less than that from defendants in many cases (Watkins 2014).

To the extent that such lawsuits transfer resources to innovators, they may be considered a necessary part of the patent system, but Bessen et al. (2011) find that such lawsuits cost defendants much more than they benefit plaintiffs, let alone innovators. Using an event study methodology, they find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in market capitalization, while plaintiffs gain less than 10 % of that amount.Footnote 15 These figures are virtually impossible to reconcile with the hypothesis that the US patent system is functioning well. Turner et al. (2013) find that the gap between private costs and private benefits of the patent system is not just positive, but growing.

3.2 Self-organization and the intellectual commons

The Bloomington school discovered many thriving commons for which standard theory would have predicted inevitable tragedy. One key to the successful management of these thriving commons has been buy-in from participants who have an active say in how the commons is managed (principles 3, 6, and 7).

Wikipedia provides a startling example of the power of these principles. Wikipedia has successfully built an enormous repository of 30 million encyclopedia articles in 286 languages, including over 4.2 million in English alone. By contrast, the Encyclopædia Britannica has 120,000 articles in its online edition, which is almost twice as many as appear in its print edition. According to one ranking, Wikipedia is the seventh most visited site in the online world.Footnote 16 What is astonishing is that Wikipedia is written and edited by a community of volunteers. Almost all Wikipedia articles can be edited at any time by anyone anywhere with an Internet connection.

Kevin Kelly, Wired magazine’s founding editor, has said that Wikipedia “is one of those things impossible in theory, but possible in practice.”Footnote 17 Why should anyone contribute without compensation to create a shared resource?

In part, the answer is that Wikipedia has hit upon a set of institutions that reward the human desire to communicate and to explain without burdening contributors with bureaucratic rules. Even newcomers are encouraged to be bold in editing articles, because the change history for every page is stored, and therefore it is impossible to commit an irreversible error.Footnote 18 Also, anyone can automatically monitor changes to any page, so a defaced page is often restored within minutes, giving vandals little incentive to vandalize (principle 4).

Contributors to Wikipedia govern themselves. Each Wikipedia article is bundled with a Talk page, on which editors can discuss, argue, vote, or otherwise resolve disputes and come to a consensus. The Talk page is both a conflict-resolution mechanism that is perceived as fair (principle 6) and a form of self-government (principle 7) that operates on a page-by-page basis, hence polycentric governance (principle 8).

Wikipedia’s success demonstrates that with the right institutions and community norms in place—and with a little help from copyright law as we will discuss in the next section—very extensive public goods can sometimes be produced in a collaborative manner.

The Internet, itself a global commons used by billions, was developed almost entirely by voluntary consensus. The Internet Engineering Task Force (IETF) developed and still maintains the Internet Protocol, a low-level system of communication used by every machine on the Internet, as well as several general application protocols like HTTP and email. Anyone can join the IETF; it has no formal membership and operates by consensus.Footnote 19 It issues no laws or regulations, only documents that others regard as “official” standards.

The extent to which public goods can be produced by voluntary associations highlights a larger dimension of Bloomington research. Vincent Ostrom (1997) argues that true democratic governance is the result of people working together in open, self-organized communities. Nowhere is this better illustrated than with the Internet. Describing how the Internet Engineering Task Force is governed, Dave Clark, a distinguished early Internet engineer, famously said, “We reject: kings, presidents, and voting. We believe in: rough consensus and running code” (Borsook 1995). For V. Ostrom, the self-governing voluntary associations that coordinate the production of Wikipedia, open-source software projects, and the Internet itself would represent democracy in the best sense of that word. The success of these institutions in contributing to the knowledge commons is a testament to the power of a robust civil society.

3.3 Maintaining the intellectual commons

It is conventional to think of creative works as permanently accessible and infinitely lived, just as we did when discussing the intellectual commons as Lockean or super-Lockean. The Bloomington school pays close attention, however, to the institutions and actors that are necessary to maintain a commons and make its fruits widely available.

In the digital era, for example, we rely increasingly on search technologies to find resources. If a resource in the digital commons can’t be found, it may as well not exist. Search, however, requires not just technology but appropriate institutional rules as well. Search in the digital world is based on indexing. Adding a work to an index is essentially creating a copy of the work. The Authors Guild filed a suit on those grounds challenging the scanning and indexing of books by Google. In November 2013, the trial court ruled in favor of Google, holding that indexing books for the purpose of offering search capabilities falls within fair use. The case likely will be appealed, and of course even if the verdict is upheld, international issues remain.

The judge in the Authors Guild case noted that “a reasonable factfinder could only find that Google Books enhances the sales of books to the benefit of copyright holders. An important factor in the success of an individual title is whether it is discovered—whether potential readers learn of its existence.”Footnote 20 Transaction costs, holdouts, and problems associated with “orphan works” are costs of relying on a permission-based system for indexing.Footnote 21 An expansive definition of fair use improves the use of the commons by enabling cheap and efficient indexing. Furthermore, the judge argued that Google Books is a transformative technology: “Google Books permits humanities scholars to analyze massive amounts of data—the literary record created by a collection of tens of millions of books. Researchers can examine word frequencies, syntactic patterns, and thematic markers to consider how literary style has changed over time.”Footnote 22 That Congress did not anticipate this use of copyrighted material underscores the importance of carving out a space within copyright law for experimentation without the need to ask for permission. Competition is a discovery procedure and is important only “insofar as its outcomes are unpredictable” (Hayek 2002).

Search on the web also raises difficult issues of property law. Whose property is being searched when a person or a robot accesses a website? Some courts have applied the common-law theory of trespass to chattels (interference with movable property) to rule that unwanted search of a website can be a tort (e.g., Oyster Software v. Forms Processing 2001, somewhat weakened to require a showing of harm in Intel v. Hamidi 2003). The legal issues have not been fully resolved, but an interesting private norm has developed. The Robot Exclusion Standard is a convention that web-crawling robots will not access or archive specific files and directories indicated in a robots.txt file. Most well-behaved search engines and other services that use crawling respect the robots.txt file in the interest of good web citizenship, although, as yet, it is not legally binding.Footnote 23 The Robot Exclusion Standard is a good example of the Bloomington finding that a commons—in this case a website accessible by anyone in the world—can be maintained by privately evolved norms rather than by law. Of course, the system is imperfect, and there is no guarantee that it will continue to work globally. Nevertheless, the alternatives also are imperfect, and norms will continue to evolve to meet new technological challenges. We should not commit the Nirvana Fallacy of comparing the actual with the ideal (Demsetz 1969).
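The convention is simple enough that many language standard libraries can interpret it. A minimal sketch using Python's urllib.robotparser (the robots.txt content, the crawler name, and the example.com URLs are hypothetical):

```python
# Minimal illustration of the Robot Exclusion Standard using Python's stdlib.
# A real crawler would fetch https://example.com/robots.txt before crawling;
# here we parse a hypothetical robots.txt directly.
import urllib.robotparser

robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Well-behaved crawlers check permission before fetching each URL.
print(rp.can_fetch("MyCrawler", "https://example.com/index.html"))         # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/data.html"))  # False
```

Nothing in the protocol enforces compliance; the check is entirely voluntary, which is exactly why it illustrates a norm-maintained commons.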

Search issues also interact with patent policy. Broad and fuzzy patents make it difficult for firms to know when or if patent infringement has occurred, or if a patent indeed exists (Bessen and Meurer 2008). Chemical formulas are standardized, so a pharmaceutical firm, for example, can search for a standardized chemical formula in a database to see whether its innovation may infringe on a patent. In many industries, however, firms don’t bother with patent searches because patents based on loose concepts with fuzzy boundaries—especially in industries without standardized terminology—make it impossible for a search to reveal the absence of a patent conclusively. Patent search fails to scale as the number of patents with fuzzy boundaries increases (Mulligan and Lee 2012).Footnote 24 Thus, the lack of clear patent boundaries reduces the potency of search and defeats one of the key arguments for patents, the dissemination of information about innovations.

Law can be used to expand, as well as to enclose, the commons. Wikipedia, for example, leverages copyright law not to enclose the intellectual commons, but to expand it. Wikipedia’s content is copyrighted, but it is licensed under a Creative Commons Attribution-ShareAlike license. Under this license, anyone is free to copy and distribute the work or to alter it. But as a condition of the license, users are required to attribute the content to Wikipedia and to license any derivative work under the same or a similar license. This last element, the requirement to license derivative works in the same or similar manner, makes Wikipedia’s license a copyleft or viral license. Viral licensing ensures that additions to the Wikipedia corpus remain a part of the commons.

Copyright law has also been leveraged to produce software, another good that can be produced by accretion through the contributions of many. The Linux kernel is licensed under a viral license and is now used in a wide range of applications, including a billion smartphones, as well as 476 of the top 500 supercomputers in the world. Linux expanded from close to 10,000 lines of code in 1991 to about 15 million lines of code today.

Taking inspiration from viral licenses in a copyright context, Schultz and Urban (2012) propose a similar approach for patent licensing. They supply a standardized open patent license in which members of an “open innovation community” agree to perpetually license their portfolios of patents to other members. Members of the community retain rights to the use of their patents against nonmembers, which is important for defensive purposes. One advantage of this approach is that it leverages network effects within an industry. For example, if two or three software firms with large patent portfolios participate in an open innovation community, it may be in the interest of most other software firms to also join the community in order to have access to the patents. The more firms that join, the greater the value of joining. In equilibrium, it is possible that all firms in a given industry would be members of the same community, effectively abolishing patents for that industry, at least among practicing entities.

The license proposed by Schultz and Urban is under consideration by some Silicon Valley firms, which are among those most harmed by overbroad patenting. Google is in the process of seeking input from its peers regarding several royalty-free patent-licensing approaches,Footnote 25 including the Defensive Patent License presented in Schultz and Urban’s article.Footnote 26 Google is also considering less aggressive approaches, including nonperpetual licensing, licenses that apply only to transferred patents, and field-of-use specific licensing. While patent pools have been used at least since the Sewing Machine Wars of the 1850s,Footnote 27 the current discussion explicitly acknowledges the benefit of an information commons. Existing pools usually cut through a limited patent thicket, and they are not open to all interested firms. They arguably can serve as entry barriers, since excluded firms won’t be able to compete in the industry. But open-invitation, portfolio-wide, reciprocal patent licensing is aimed at expanding the commons, not merely addressing one specific patent thicket.

The default rules for government protection of monopoly are a related issue. Copyright, for example, used to require formalities such as registration, notice, and renewal, but these requirements were largely abandoned in the United States with the Berne Implementation Act of 1988. The default rule is that every work is automatically copyrighted the moment it is fixed in some tangible form. The abandonment of formalities has created an orphan works problem: works whose legal status is not apparent. Under current US law, works published before 1923 are in the public domain, but if a work does not contain a copyright notice, ascertaining when it was published may be impossible. The identity of the author of the work also may be impossible to discover, because some published works, like photographs, do not include the name of the author in the body of the work.

More generally, the default rule of automatic copyright suggests that the intellectual commons is of so little value that it is automatically bypassed. A more balanced rule could, without disparaging the rights of authors, recognize that government protection has a significant opportunity cost because of the value of sustaining the intellectual commons. Patent renewal and maintenance fees of more than nominal size would serve a similar purpose.

Maximizing the benefit of the commons requires that the courts think about copyright law and patent law not simply as a set of rules to protect the rights of authors and inventors but also as a set of rules for managing the intellectual commons. When a work enters the public domain, copyright law’s purpose is not ending but in many ways just beginning. How and when content enters the public domain, how content is discovered, and how content can be used are some of the most important aspects of copyright law. Similarly, patent law needs to be developed in light of its influences on search and discovery and on the diffusion of information, both during the life of the patent and afterward.

4 Tensions, synergies, and reforms

The Virginia and Bloomington approaches to intellectual property are for the most part compatible, but there are tensions as well as synergies. The first tension is between Virginia’s insistence on examining the incentives facing political actors and Bloomington’s emphasis on the roles played by various institutions and actors. For example, a number of authors in Hess and Ostrom (2006a) place a heavy emphasis on the virtuous role that librarians, among others, play in nurturing the knowledge commons. Undoubtedly librarians and other information professionals do play such a role, but it is nevertheless inconsistent with a Virginia approach to assume that librarians are more virtuous than other actors, including politicians. “Library science without romance” does not have the catchiness of Buchanan’s original phrase, but as Hume argued ([1777] 1987), with concurrence from Brennan and Buchanan (2000), all actors ought to be considered knaves, at least for the purposes of creating robust institutions.

Many of the problems the Bloomington school identifies could also be remedied through greater governmental knowledge production or funding of knowledge production. Federal funding for agricultural and medical research both in the United States and around the world, for example, appears to have had a high return (Evenson 2001; Murphy and Topel 2003). Virginia-influenced scholars, however, tend to be more skeptical about the returns to be had from politicizing science. Nevertheless, some institutions may be more robust to rent-seeking and politicization than others. The Morrill Act of 1862, for example, did not fund research directly but instead provided land to the states to use in the creation of colleges and universities devoted to research and teaching “agriculture and the mechanic arts.” Similarly, the National Institutes of Health (NIH) funds research through a decentralized, peer-review process, driven for the most part by scientists themselves. Rent-seeking is not eliminated in either approach, but both approaches are likely to be much more valuable than scientific earmarks, which open up the scientific funding process to the same incentives, lobbyists, and favor trading that bring us bridges to nowhere (Savage 2000; de Figueiredo and Silverman 2007).

On the margin, governments have many opportunities for “nudging” to enhance the public domain. In 2008, for example, the National Institutes of Health required that any published results of NIH-funded research be made available within 12 months at PubMed Central, a free, full-text online archive. The NIH’s public access policy enhances the public domain while also providing a window of opportunity for publishers to recoup their costs.Footnote 28

The NIH policy requires that publications resulting from federally financed research enter the public domain after one year, but what about patents resulting from federally funded research? Two proposals dominated the discussion in the second half of the twentieth century. Senator Harley Kilgore, whose distrust of monopoly dated back to his father’s business loss to Standard Oil, proposed that federally financed research be patented by the federal government and then placed into the public domain. Vannevar Bush proposed that patent rights should be assigned to the private firms doing the research in order to promote commercialization. Until 1980, neither proposal won out, resulting in the worst of both: a default rule that patents were assigned to the federal government initially, but with no standard policy of placing the research into the public domain. Moreover, businesses could obtain patent rights, but only if they could reach an agreement with their funding agencies. Agreements were nonstandard both through time and across agencies, leading to high transaction costs, especially in the common situation when more than one agency funded the same research. A concern therefore arose that not enough federally funded research was being commercialized (Sampat 2006).

The Bayh-Dole Act of 1980 resolved the situation in favor of Vannevar Bush. The act changed the default rule from federal ownership to private ownership of patents resulting from federally financed research. At first, the rule applied only to universities and small businesses, but as the rent-seeking motive attracted organized interests, the rule was quickly extended by executive order to large businesses (Sampat 2006).

The effect of the Bayh-Dole Act appears to have been most significant on university patenting. Before the act, a strong norm existed that universities were simply not in the business of patenting. After 1980, a new norm supplanted the old, essentially saying that university patenting was a sign of entrepreneurship and public utility. The opportunity for universities to patent likely also encouraged (1) more patenting of the basic or foundational ideas that universities were designed to produce and (2) less research in basic or foundational ideas and more research in ideas closer to commercialization (Rai and Eisenberg 2004; Sampat 2006). In other words, the Bayh-Dole Act likely encouraged universities to behave more like private firms and less like producers of public goods, creating a deficit in the total innovation system.

In some cases, the NIH has tried to push back against default private patenting, especially for foundational ideas such as those associated with the human genome project. The Bayh-Dole Act, however, greatly restricts the conditions that agencies can impose on fund recipients. Rai and Eisenberg (2004) argue that the act should be modified to give funding agencies greater leeway to establish conditions on any potential IP rights before funding. Another possibility would be to follow Tabarrok’s (2002) suggestion to tie the length of the patent term more closely to sunk costs of research and development. It is one thing to grant a pharmaceutical firm a 20-year monopoly on a drug produced after spending a billion dollars of private money on research and development. It is quite another to grant a defense firm a 20-year monopoly on an idea produced in the course of developing a product for the military. As the Virginia school reminds us, the incentives of the NIH and other agencies do not necessarily align with the public interest. Nevertheless, it seems likely that the biases of the political process, regulatory capture, and so forth would mostly push the political process to grant rights to private interests that are too strong rather than too weak.

Despite these tensions, the Virginia and Bloomington approaches offer considerable synergies. At a high level, the Virginia school explains why intellectual property law has expanded in recent decades, and the Bloomington school gives us the tools we need to interpret the consequences of that expansion. Combining ideas from the two schools can help determine which reforms are most likely to increase innovation in creative works and inventions. The goal should be the creation of alternative institutions that are more difficult for the content industry and patent bar to capture, while excluding less and still rewarding creativity.

4.1 Reforming intellectual property law

We have already noted a number of potential reforms. The decriminalization of intellectual property infringement, making infringement in all cases only a civil offense, is another reform that coheres with the perspective of both schools. From a Virginia public choice perspective, criminal prosecution is a subsidy, meaning that rights holders can free ride on the investigative resources of the state. To Bloomington scholars, criminal prosecution seems to violate the finding that commons institutions that work well implement graduated punishments (principle 5).

Interestingly, content owners are beginning to discover on their own that severe but infrequently imposed punishments are not as effective a deterrent as mild, graduated, and more certainly imposed sanctions. A decade ago, the recording industry sued a small minority of individual casual file sharers for millions of dollars of damages—$150,000 per violation, when users were routinely sharing 1,000 songs at a time.Footnote 29 These cases created a lot of resentment among music fans and failed to deter piracy. Within the past year, however, the content industry has taken a different approach. The major content companies and industry associations have established a program with the cooperation of the largest American Internet service providers to privately impose minor, graduated sanctions on illicit file sharers. The system, called the Copyright Alert System and informally known as the “six strikes” program, notifies customers when their Internet account is used for piracy, and imposes penalties ranging from warnings to a temporary slowdown of Internet service. While it remains to be seen whether this program is effective, the restrained nature of the punishments and the fact that it is based entirely on private agreements make it arguably desirable from both Virginia and Bloomington perspectives.

A second reform for patents would be to abolish the Federal Circuit Court of Appeals and return to the pre-1982 system in which all circuit courts could hear appeals from cases arising in their districts. From a Virginia school perspective, such an approach would make patent jurisprudence less subject to influence by the patent bar, since judges would be nonspecialists. From a Bloomington view, enabling a greater diversity of voices at the appeals level would allow more expansive experimentation in rule articulation, which could improve the law in the long run.

Another possible reform would be less reliance on intellectual property to reward innovation and more on prizes, voluntary contributions, and assurance contracts (Tabarrok 1998, 2011).

4.2 What is the optimal scale of IP policy?

In recent years, IP law has moved from being national to global, with nations bound by international agreements such as the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and institutions such as the World Trade Organization. Many intellectual resources are global public goods; some pharmaceuticals, for example, benefit someone nearly everywhere. In theory, therefore, policymaking at the global level would be optimal if costs and benefits could be made proportional by focusing attention on the diversity of facts on the ground (Bloomington principles 2 and 8). In practice, transaction costs render global policymaking based on self-organization, consensus, and bargaining highly unlikely. Hegemony is the dominant outcome.

Rent-seeking, rather than a matching of public good production to public good scale, has driven the evolution of global IP law (Drahos 1996). In the late 1980s and 1990s, US IP producers, most notably the CEOs of Pfizer, IBM, and Du Pont who sat on the President’s Advisory Committee for Trade Policy, lobbied to link US trade policy and intellectual property policy (Devereaux et al. 2006). The Office of the US Trade Representative put countries on notice that if they did not protect US intellectual property, sanctions would be placed on the exports of deviating countries. The linkage approach to IP culminated in the 1994 TRIPS agreement. TRIPS required that all signatory countries extend copyright protection to at least 50 years, abandon all copyright formalities (which, as we noted earlier, privileges enclosure over the public domain), extend patent protection to at least 20 years, and allow the patenting of “inventions” in all “fields of technology” (arguably including software), among other provisions. Rent-seeking today influences global policy, not just national policy.Footnote 30

The TRIPS agreement has been controversial, especially regarding pharmaceuticals in developing countries. The controversy is not surprising, given that hegemony conflicts with the Bloomington principles of successful common pool management such as participation, self-organization, accessible and independent conflict resolution, and the proportioning of costs to benefits. The Bloomington school principles suggest that the global IP system as it currently exists is not an example of a successfully managed common pool resource, and it will therefore be stable only so long as the hegemon is stable.

5 Conclusion

The application of ideas from both the Virginia school and the Bloomington school to intellectual property substantially revises the basic economic narrative of IP. While innovators must be compensated in some way, these perspectives remind us that better and worse ways of achieving that goal exist. In the last several decades, government institutions that reward innovators have become more responsive to rent-seeking and more hostile to innovation. In evaluating the possibilities for reform, it is important to consider the ways in which nonmarket decisions—both political and institutional—interact with intellectual property.