1 Introduction

Nearly every government, regardless of political model or ideological orientation, is today concerned with fake news and the ostensibly elevated combativeness of media discourse. These concerns reflect, in part, the structural changes in news content, delivery, and interpretation enabled by social media (Fagan 2018). Lawmakers are faced with the decision to implement new rules or continue enforcing status quo law. While the status quo in most jurisdictions generally provides for platform immunity for user-generated content, lawmakers can nonetheless regulate platforms through the enforcement of other rules. For instance, the U.S. Federal Election Campaign Act (FECA) prohibits willful causation of campaign advertisement purchases by non-U.S. persons or entities (Footnote 1). Platforms that knowingly induce purchases of campaign advertisements from foreign persons or organizations may be subject to criminal liability. Enforcement of other rules that govern platforms, even if liability does not depend upon user speech, can moderate platform content inasmuch as compliance with those rules affects prevailing levels of platform content and civility. Other rules that govern platforms—including competition rules, various privacy laws such as the E.U. General Data Protection Regulation (GDPR), and rules that require expedient notification of data breaches—also moderate content inasmuch as they change the incentives for hosting it (Footnote 2). Further, lawmakers can directly prosecute users who engage in prohibited speech, which also exerts a moderation effect. Thus, even in jurisdictions where platforms are immune from liability (Footnote 3), there remain rules that determine the disposition of platform discourse. By contrast, platform liability regimes, such as those embodied in Australia’s Sharing of Abhorrent Violent Material Bill, Germany’s Network Enforcement Act, and Singapore’s Protection from Online Falsehoods and Manipulation Bill, hold platforms liable for the speech of their users. Liability requires some level of knowledge; these regimes accordingly impose liability when platforms fail to remove illegal content after receiving notice. The point, however, is that under these regimes, platforms are held liable for user speech.

Ideally, lawmakers are concerned with the institutional health of their societies. All governments, especially liberal democracies, rely on citizen discourse as a lawmaking input, and its disposition is generally understood to be positively correlated with good lawmaking and robust institutions (Rawls 1993). At a minimum, citizens who exchange views are more likely to empathize with each other and reach a broader consensus that elevates social welfare. Moreover, discourse that is markedly civil, that is, discourse in which citizens dispassionately share their views, is associated with higher levels of bargaining and preference satisfaction (Bejan 2017). Civility leads to the inclusion of social groups and engagement between opposing groups, which expands the political bargaining space and the resultant gains from trade. Incivility exerts the opposite effect. While the analysis below does not explicitly model the effect of fake news on civility, higher levels of fake news may be conducive to higher levels of discursive conflict, disengagement, and polarization. Simultaneously, greater conflict may generate greater demand for fake news.

Regulating speech, however, is socially costly. Fake news can be difficult to ascertain. Discerning civil from uncivil discourse at the margins is demanding. Thus, liberal speech regimes are generally predicated upon the avoidance of error (Posner 1986). They are also predicated on the competitive screening of political ideas. Higher levels of information, enabled by loose restrictions on speech, lead to intense competition and better ideas. Moreover, many of today’s fora for exchanging political ideas serve to reinforce group loyalty and provide entertainment (Bloom 2016: 236). Reduction of fake news and incivility within those fora can have little impact on institutional health inasmuch as their participants are isolated. On the other hand, if a forum is considered a public good, or if private speech acts generate externalities, then there can be a basis for regulation [Packingham v. North Carolina, 582 U.S. ___ (2017); Coase (1974); but see Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921 (2019)]. As such, lawmakers interested in the institutional health of their societies should concern themselves with user-generated platform speech when the costs of market failure become excessive.

In the model below, lawmakers consider the social costs of uncivil speech on platforms. By assumption, platform incivility generates institutional decay, which lawmakers seek to minimize when setting a social welfare-maximizing platform immunity policy. The model demonstrates that lawmakers prefer immunity, even if user incivility and platform non-compliance with other rules are increasing, if the costs of implementing a platform liability regime are greater than the costs of enforcing status quo law. In addition, inasmuch as implementation of a platform liability regime is coupled with speech restrictions that are unconstitutional or prohibitively costly, lawmakers prefer immunity, but platforms are free to set strong content moderation policies consistent with existing law. Thus, the private governance function of platforms highlighted by Balkin (2018), Klonick (2018), Langvardt (2018), and others is directly related to lawmakers’ ability to enact and enforce alternatives, and further, it goes beyond private enforcement of existing free speech restrictions. Inasmuch as constitutional limits, as well as excessive lawmaking and enforcement costs, prevent lawmakers from suppressing unwanted speech, lawmakers give platforms wider discretion to make private suppression decisions. The status quo governance function of platforms, therefore, entails a private lawmaking function for determining which types of speech to suppress.

2 Model

The model describes a unit measure of platform users i and a unit measure of platforms p, both with heterogeneous preferences independently drawn from absolutely continuous distributions. Benevolent lawmakers minimize social costs based upon their anticipation of the effect of the platform immunity regime on institutional health. This effect is referred to as incivility for exposition.

The model consists of two periods. In the first period, the actual effectiveness of the immunity policy on incivility is unknown. Lawmakers move first and choose \(\lambda \in \left\{ 0,1\right\}\), which is the binary decision to continue status quo immunity \(\left( 0\right)\) or implement a new platform liability regime \(\left( 1\right)\). The status quo regime has some bite. As explained above, lawmakers can enforce existing laws that impact platform content through rules like FECA and the GDPR (Footnote 4). In addition, lawmakers can enforce existing laws that forbid illegal speech, such as defamation and incitement to imminent lawless action. Implementation of a new content moderation regime, by contrast, entails lawmaking and additional enforcement costs (together referred to as implementation costs) in order to hold platforms liable for user-generated content.

Once lawmakers choose the liability regime, platform users choose their aggregate level of incivility \({\bar{k}}\). Platforms then observe aggregate incivility and optimally set their internal compliance policies, which gives \({\bar{h}}\), the aggregate level of platform non-compliance (Footnote 5). The model permits selection of internal compliance policies on any basis, including profit, corporate image, long-term viability, good citizenship, and a desire for friendly legal environments. In the second period, lawmakers decide to continue the policy of status quo immunity or replace it with a platform liability rule. The model treats period 2 as notional. Its significance is that at the beginning of the period, lawmakers can observe a signal related to the actual effectiveness of the chosen regime in reducing incivility.

The precision of the signal, by assumption, is increasing in the level of civility (i.e. reductions in incivility) chosen by platform users in period 1, which is then reflected in platform discourse (Footnote 6). It is further assumed that the social value of the lawmakers’ decision is increasing in the precision of the signal, that is, welfare-maximizing lawmakers make better decisions as their understanding of a policy outcome increases. Thus, the underlying premise of the model is that the social value of the period 2 decision is increasing, for lawmakers and platform users, in the period 1 level of user civility and its impact on platform discourse. This feature is modeled by including a “value of revealed information” term in the objective function of platform users. By backward induction, in any equilibrium, lawmakers take into account the platform users’ best response and choose \(\lambda\) accordingly in period 1. The order of play can be summarized as follows:

  1. Lawmakers choose the platform liability regime \(\lambda\) from set \(\left\{ 0,1\right\}\) where \(\left( 0\right)\) is immunity and \(\left( 1\right)\) provides for liability for user-generated content.

  2. Platform users observe \(\lambda\) and choose their aggregate level of optimal incivility \({\bar{k}}\).

  3. Platforms observe \({\bar{k}}\) and set internal compliance policies, which give \({\bar{h}}\), the aggregate level of platform non-compliance.

  4. Lawmakers observe \({\bar{h}}\) and \({\bar{k}}\), and their impact on institutional health, and choose to implement liability or continue immunity.

2.1 Social media user’s optimal civility

Given lawmakers’ chosen platform liability regime, platform users choose their aggregate level of incivility. An individual user’s utility function is given by

$$\begin{aligned} u_{i}\left( \sigma _{i},\omega _{i},\lambda \right) =-\left( \sigma _{i}-\omega _{i}\right) ^{2}-s_{\lambda }\left( y-\omega _{i}\right) ^{2}-t_{\lambda }\left( \omega _{i}\right) +\beta W\left( \omega _{i}\right) \end{aligned}$$
(1)

where \(\sigma _{i}\) is the ideal policy location for the individual i and y is the policy location of the liability regime. The policy location determines whether specific user content is sanctioned. Note that the location is flexible enough to account for any policy of content screening. The location may simply prohibit incitement to imminent lawless action, or it may additionally prohibit abhorrent violent material and fake news. The location also contemplates existing restrictions on speech such as fraud, defamation, and criminal hate speech. Both \(\sigma _{i}\) and y lie anywhere on the real line. Each platform user chooses an action \(\omega _{i}\), such as posting or commenting on platform content, which also lies on the real line. The distance \(\left| y-\omega _{i}\right|\) is the measure of a user’s incivility, which carries a proportional penalty s in the form of first-, second-, and third-party sanctions (Footnote 7). Simultaneously, complying with the policy of the liability regime can itself be costly since compliance may require changes to behavior. These costs, denoted here by the function \(t_{\lambda }\left( \cdot \right)\), depend upon the prevailing liability regime \(\lambda\) and the difference between the policy location of the existing regime, i.e. the status quo normalized to 0, and the chosen action \(\omega _{i}\).

The term W is realized in the second period after lawmakers observe the aggregate incivility of platform discourse and decide to continue or change the liability regime. As a result, it is discounted by the factor \(\beta\), which represents the impatience of the populace for the welfare-maximizing regime. In this two-period formulation of the game, adjustment costs are incurred during the first period only so as to avoid modeling any strategic interaction in the second period. It is assumed for simplicity that lawmakers make the socially optimal decision in period 2, given the information revealed in period 1. Making this assumption avoids the need to model repeated future periods.

The function above can be rewritten with \(k=y-\omega _{i}\), which is the distance between the policy location of the liability regime and the action chosen by the user, and which can be interpreted as the level of incivility

$$\begin{aligned} u_{i}\left( \sigma _{i},\omega _{i},\lambda \right) =-\left( \sigma _{i}-y+k_{i}\right) ^{2}-s_{\lambda }\left( k_{i}\right) ^{2}-t_{\lambda }\left( y-k_{i}\right) +\beta W\left( y-k_{i}\right) \end{aligned}$$
(2)

Let \(t_{\lambda }\) be a quadratic cost function with fixed and marginal costs of moving away from the status quo, i.e. the level of civility required by the existing regime, normalized to 0

$$\begin{aligned} t_{\lambda }\left( \omega _{i}\right) =a_{\lambda }+b_{\lambda }\omega _{i}^{2}=a_{\lambda }+b_{\lambda }\left( y-k_{i}\right) ^{2} \end{aligned}$$
(3)

Substituting Eq. 3 into Eq. 2 gives

$$\begin{aligned} -\left( \sigma _{i}-y+k_{i}\right) ^{2}-s_{\lambda }\left( k_{i}\right) ^{2}-a_{\lambda }-b_{\lambda }\left( y-k_{i}\right) ^{2}+\beta W\left( y-k_{i}\right) \end{aligned}$$
(4)

Maximizing gives the individual user’s optimal level of incivility \(k_{i}\)

$$\begin{aligned} k_{i}=\frac{y-\sigma _{i}}{1+s_{\lambda }+b_{\lambda }}+\frac{b_{\lambda }y}{1+s_{\lambda }+b_{\lambda }}-\frac{1}{2\left( 1+s_{\lambda }+b_{\lambda } \right) }\beta W_{\omega } \end{aligned}$$
(5)

where \(W_{\omega }\) is the partial derivative of W with respect to \(\omega\).
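The first-order condition behind Eq. 5 can be checked symbolically. Below is a minimal sympy sketch of the derivation from Eq. 4, under the simplifying assumption that W is locally linear in \(\omega\) with constant slope \(W_{\omega }\); the variable names are illustrative only.

```python
import sympy as sp

# Illustrative symbols: user ideal point sigma, policy location y,
# incivility k, penalty s, marginal compliance cost b, discount beta,
# and a constant marginal information value W_w (W treated as linear).
sigma, y, k, s, b, beta, W_w = sp.symbols('sigma y k s b beta W_w', real=True)

# Period-1 user payoff from Eq. 4, with W(y - k) replaced by W_w*(y - k);
# the fixed compliance cost a_lambda drops out of the first-order condition.
u = -(sigma - y + k)**2 - s*k**2 - b*(y - k)**2 + beta*W_w*(y - k)

k_star = sp.solve(sp.diff(u, k), k)[0]
print(sp.simplify(k_star - ((y - sigma) + b*y - beta*W_w/2) / (1 + s + b)))
# prints 0, confirming the closed form in Eq. 5
```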

As expected, incivility is higher the further the policy location of the liability regime is from the user’s ideal position \(\sigma\). However, the penalty s exerts a downward pressure on this response. In addition, incivility is higher the more radical the policy is, i.e. the further it is from the status quo. But again, the effect is dampened by the penalty. The marginal cost of compliance, \(b_{\lambda }\), has a significant influence on incivility as well. Inasmuch as

$$\begin{aligned} \frac{\partial }{\partial b_{\lambda }}\left[ \frac{b_{\lambda }}{1+s_{\lambda }+b_{\lambda }}\right] >0, \end{aligned}$$
(6)

incivility is higher if the marginal cost of compliance is higher. Finally, the last term indicates that incivility falls as the marginal value of the information revealed by the first-period level of civility rises.

Integrating over the complete distribution of individual choices gives the aggregate k for any given distribution, among users, of ideal policy locations \(\sigma\) of the platform liability regime. For a given distribution of \(\sigma\) with density \(f\left( \sigma \right)\), the aggregate level of user incivility is given by

$$\begin{aligned} {\bar{k}}=\int k_{i}f\left( \sigma \right) d\sigma \end{aligned}$$
(7)

Note that since integration is a linear operation, the various parameters affect \({\bar{k}}\) the same way they affect the individual \(k_{i}\).
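Because \(k_{i}\) is linear in \(\sigma _{i}\), the aggregate depends on \(f\left( \sigma \right)\) only through its mean. Writing \(\bar{\sigma }=\int \sigma f\left( \sigma \right) d\sigma\) and treating \(W_{\omega }\) as common across users (an assumption made here for illustration), Eq. 7 reduces to

$$\begin{aligned} {\bar{k}}=\frac{y-\bar{\sigma }}{1+s_{\lambda }+b_{\lambda }}+\frac{b_{\lambda }y}{1+s_{\lambda }+b_{\lambda }}-\frac{\beta W_{\omega }}{2\left( 1+s_{\lambda }+b_{\lambda }\right) } \end{aligned}$$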

2.2 Platform’s optimal level of non-compliance

Given users’ aggregate level of incivility and the platform liability regime \(\lambda\), platforms set their internal content moderation policies. An individual platform’s utility is given by

$$\begin{aligned} u_{p}\left( \alpha _{p},\theta _{p},\lambda \right) =-\left( \alpha _{p}-\theta _{p}\right) ^{2}-o_{\lambda }\left( y-\theta _{p}\right) ^{2}-m_{\lambda }\left( \theta _{p}\right) \end{aligned}$$
(8)

where \(\alpha _{p}\) is the ideal policy location of the content moderation regime for platform p and \(\theta _{p}\) is its chosen internal compliance policy (Footnote 8). Recall that y is the policy location of the content moderation regime. Both \(\alpha _{p}\) and y lie anywhere on the real line. The location of a platform’s compliance policy \(\theta _{p}\) also lies on the real line. The distance \(\left| y-\theta _{p}\right|\) is the measure of a platform’s non-compliance, which carries a proportional penalty o in the form of fines, injunctions, legal fees, and other associated costs. Finally, administration of a content moderation policy is costly as it requires moderation of user content. These costs are denoted by the function \(m_{\lambda }\left( \cdot \right)\) and depend upon the platform liability regime \(\lambda\) chosen by lawmakers, as well as the difference between the policy location of the original regime, i.e. the status quo normalized to 0, and the location of the platform’s internal moderation policy \(\theta _{p}\).

Rewriting the equation above with \(h=y-\theta _{p}\), which is the distance between the policy location of the platform liability regime and an internal content moderation policy chosen by platform p, and which can be interpreted as a platform’s non-compliance, gives

$$\begin{aligned} u_{p}\left( \alpha _{p},\theta _{p},\lambda \right) =-\left( \alpha _{p}-y+h_{p}\right) ^{2}-o_{\lambda }\left( h_{p}\right) ^{2}-m_{\lambda }\left( y-h_{p}\right) \end{aligned}$$
(9)

Let \(m_{\lambda }\left( \cdot \right)\) be a quadratic cost function with fixed and marginal costs of moving away from the status quo

$$\begin{aligned} m_{\lambda }\left( \theta _{p}\right) =c_{\lambda }+d_{\lambda }\theta _{p}^{2}=c_{\lambda }+d_{\lambda }\left( y-h_{p}\right) ^{2} \end{aligned}$$
(10)

Substituting into Eq. 9 gives

$$\begin{aligned} -\left( \alpha _{p}-y+h_{p}\right) ^{2}-o_{\lambda }\left( h_{p}\right) ^{2}-c_{\lambda }-d_{\lambda }\left( y-h_{p}\right) ^{2} \end{aligned}$$
(11)

Maximizing gives the platform’s optimal level of non-compliance \(h_{p}\):

$$\begin{aligned} h_{p}=\frac{y-\alpha _{p}}{1+o_{\lambda }+d_{\lambda }}+\frac{d_{\lambda }y}{1+o_{\lambda }+d_{\lambda }} \end{aligned}$$
(12)

Examining this expression demonstrates the trade-offs faced by the platform. First, as expected, platform non-compliance is higher the further the policy location of the platform liability regime is from the platform’s preferred position.

While non-compliance is higher the greater the distance is between the platform’s ideal policy location and the location set by lawmakers, the penalty o exerts a downward pressure on this response. More importantly, the second term suggests that non-compliance is higher the more radical is the new policy, i.e. the further it is from the status quo. But again, this effect is dampened by the penalty imposed. The marginal cost of compliance, \(d_{\lambda }\), also has significant influence on the prevailing level of platform non-compliance. Inasmuch as

$$\begin{aligned} \frac{\partial }{\partial d_{\lambda }}\left[ \frac{d_{\lambda }y}{1+o_{\lambda }+d_{\lambda }}\right] >0, \end{aligned}$$
(13)

non-compliance is higher if the marginal cost of compliance is higher.
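A parallel sympy sketch checks the platform’s first-order condition from Eq. 11 and, in particular, the \(1+o_{\lambda }+d_{\lambda }\) denominator in Eq. 12; the symbols are again illustrative only.

```python
import sympy as sp

# Illustrative symbols: platform ideal point alpha, policy location y,
# non-compliance h, penalty o, marginal moderation cost d.
alpha, y, h, o, d = sp.symbols('alpha y h o d', real=True)

# Period-1 platform payoff from Eq. 11; the fixed cost c_lambda drops out
# of the first-order condition.
u = -(alpha - y + h)**2 - o*h**2 - d*(y - h)**2

h_star = sp.solve(sp.diff(u, h), h)[0]
print(sp.simplify(h_star - ((y - alpha) + d*y) / (1 + o + d)))
# prints 0, confirming the closed form in Eq. 12
```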

Integrating over the whole distribution of platform non-compliance gives the aggregate h for any given distribution, among platforms, of ideal content moderation regimes. For a given distribution of \(\alpha\) with density \(f\left( \alpha \right)\), the aggregate level of non-compliance across all platforms is given by

$$\begin{aligned} {\bar{h}}=\int h_{p}f\left( \alpha \right) d\alpha . \end{aligned}$$
(14)

Again, as integration is a linear operation, the various parameters affect \({\bar{h}}\) the same way as discussed above for individual \(h_{p}\).
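As on the user side, \(h_{p}\) is linear in \(\alpha _{p}\), so \({\bar{h}}\) depends on \(f\left( \alpha \right)\) only through its mean \(\bar{\alpha }=\int \alpha f\left( \alpha \right) d\alpha\):

$$\begin{aligned} {\bar{h}}=\frac{y-\bar{\alpha }}{1+o_{\lambda }+d_{\lambda }}+\frac{d_{\lambda }y}{1+o_{\lambda }+d_{\lambda }} \end{aligned}$$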

2.3 Lawmakers’ optimal content moderation policy

Given users’ anticipated aggregate level of incivility \({\bar{k}}\) and platforms’ anticipated aggregate level of non-compliance \({\bar{h}}\), legislators respond by minimizing the cost of institutional decay generated by incivility, i.e. maximizing the objective function

$$\begin{aligned} v\left( \lambda ,{\bar{k}},{\bar{h}}\right) =-\psi _{\lambda }\left| y\right| -\epsilon _{\lambda }\left( {\bar{h}},{\bar{k}}\right) +\delta V\left( y-{\bar{k}}\right) \end{aligned}$$
(15)

The first term represents the cost of enacting a new policy of holding platforms liable for user speech, which is directly proportional to the absolute distance from the status quo. By definition, any move in y must be accompanied by a switch from immunity to liability. As a result, enactment costs under immunity are 0 (Footnote 9). The model can account for three possibilities when lawmakers implement platform liability for user speech. The policy location can remain the same; the policy location can increase (because, for instance, platform liability is coupled with a new rule against fake news, as in Singapore’s Protection from Online Falsehoods and Manipulation Bill); or the policy location can decrease as a result of, for example, a compromise or bargain struck between platforms and lawmakers.

The second term represents enforcement costs, which depend upon aggregate non-compliance \({\bar{h}}\) and aggregate incivility \({\bar{k}}\)

$$\begin{aligned} \epsilon _{\lambda }\left( {\bar{h}},{\bar{k}}\right) =r_{\lambda }{\bar{h}}+w_{\lambda }{\bar{k}} \end{aligned}$$
(16)

By assumption, \(r_{1}>r_{0}\), since under a liability regime, platforms will be liable for failing to sufficiently moderate user speech in addition to existing rules that hold platforms liable for other reasons. On the other hand, \(w_{1}<w_{0}\) since platforms engage in greater moderation under a regime that holds them liable for user speech.

Finally, the third term represents the value of revealed information from making an optimal decision in period 2. This term is a function of aggregate user compliance, i.e. civility.

Recall that lawmakers are faced with the choice between maintaining status quo immunity and implementing a platform liability regime. They will implement the liability regime if

$$\begin{aligned} -\psi _{1}\left| y\right| -r_{1}{\bar{h}}-w_{1}{\bar{k}}+\delta V\left( y-{\bar{k}}\right) >-r_{0}{\bar{h}}-w_{0}{\bar{k}}+\delta V\left( y-{\bar{k}}\right) \end{aligned}$$
(17)
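The comparison in Eq. 17 is straightforward to operationalize. The sketch below encodes it directly, taking \({\bar{h}}\) and \({\bar{k}}\) as given and using purely hypothetical parameter values; note that the \(\delta V\left( y-{\bar{k}}\right)\) term appears on both sides and cancels.

```python
def prefers_liability(psi1, y, r0, r1, w0, w1, k_bar, h_bar):
    """Eq. 17: implement liability iff its payoff exceeds immunity's.
    The delta*V(y - k_bar) term is common to both sides and cancels."""
    liability = -psi1 * abs(y) - r1 * h_bar - w1 * k_bar
    immunity = -r0 * h_bar - w0 * k_bar
    return liability > immunity

# Hypothetical values satisfying the model's assumptions r1 > r0, w1 < w0:
print(prefers_liability(psi1=0.1, y=1.0, r0=0.2, r1=0.5,
                        w0=1.0, w1=0.3, k_bar=0.8, h_bar=0.4))
# True: user enforcement savings (w0 - w1)*k_bar = 0.56 outweigh the
# enactment cost 0.1 plus extra platform enforcement (r1 - r0)*h_bar = 0.12
```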

3 Comparing welfare

Proposition 1

Lawmakers prefer liability (immunity) as platform liability penalties increase (decrease); user penalties decrease (increase); platform marginal compliance costs decrease (increase); and user marginal compliance costs increase (decrease).

To reduce incivility and institutional decay, lawmakers face a tradeoff between incurring platform liability enactment and enforcement costs and the costs of enforcing existing law. Enactment costs depend upon the policy location of the speech screening policy. Enforcement costs depend upon platform non-compliance \({\bar{h}}\) and user incivility \({\bar{k}}\). Recall that aggregate platform non-compliance depends upon the distance of the policy location from the status quo, normalized to 0; the distance of the policy location from the platforms’ ideal positions \(y-\alpha\); the penalties imposed for non-compliance o; and the marginal costs of compliance d. User incivility depends upon the distance of the policy location from the status quo; the distance of the policy location from the users’ ideal policy positions \(y-\sigma\); the penalties imposed for incivility s; and the marginal costs of compliance b.

Consider platform penalties o. Increases (decreases) in o decrease (increase) \({\bar{h}}\) for \(\lambda =0,1\). However, under a liability regime, platforms will be liable for failing to sufficiently moderate user speech in addition to existing rules that hold platforms liable for other reasons, and \(r_{1}>r_{0}\) by assumption. Thus, for any o, platform enforcement costs under liability are greater than platform enforcement costs under immunity. As a result, lawmakers prefer liability (immunity) when o is increasing (decreasing), holding other factors constant (Footnote 10). Consider user penalties s. Decreases (increases) in s increase (decrease) \({\bar{k}}\) for \(\lambda =0,1\). However, \(w_{1}<w_{0}\) by assumption, since platforms engage in greater moderation under a regime that holds them liable for user speech. Thus, for any s, user enforcement costs under liability are less than user enforcement costs under immunity. As a result, lawmakers prefer liability (immunity) when s is decreasing (increasing), holding other factors constant.

Table 1 Summary results

Parameter                                 Prefer liability   Prefer immunity
Platform penalties (o)                    Increasing         Decreasing
User penalties (s)                        Decreasing         Increasing
Platform marginal compliance costs (d)    Decreasing         Increasing
User marginal compliance costs (b)        Increasing         Decreasing

Consider platform marginal compliance costs d. Decreased (increased) platform marginal compliance costs d decrease (increase) aggregate platform non-compliance \({\bar{h}}\). Given that \(r_{1}>r_{0}\), lawmakers prefer liability (immunity) for decreasing (increasing) d, holding other factors constant. Consider user marginal compliance costs b. Increased (decreased) user marginal compliance costs b increase (decrease) aggregate incivility \({\bar{k}}\). As \({\bar{k}}\) increases (decreases), user enforcement becomes relatively more costly under immunity (liability), given that \(w_{1}<w_{0}\), and lawmakers prefer liability (immunity), holding other factors constant.
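Proposition 1’s comparative statics can be illustrated numerically. The following sketch uses hypothetical parameters, point-mass ideal points at 0, \(\beta W_{\omega }=0\), and \(y=1\); only the platform marginal compliance cost d varies.

```python
# Closed forms from Eqs. 5 and 12 with y = 1, sigma = alpha = 0, beta*W_w = 0
def h_bar(o, d): return (1 + d) / (1 + o + d)
def k_bar(s, b): return (1 + b) / (1 + s + b)

psi1, r0, r1, w0, w1 = 0.05, 0.2, 0.8, 1.0, 0.3   # r1 > r0, w1 < w0
kb = k_bar(s=0.5, b=0.5)
for d in (0.2, 2.0):                               # lower d favors liability
    hb = h_bar(o=0.5, d=d)
    liability = -psi1 - r1 * hb - w1 * kb          # psi1*|y| with y = 1
    immunity = -r0 * hb - w0 * kb
    print(d, 'liability' if liability > immunity else 'immunity')
# d=0.2 -> liability; d=2.0 -> immunity, as Proposition 1 predicts
```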

Proposition 2

When moving from immunity to liability, lawmakers reduce enforcement costs the less radical is any change to the speech screening policy, and the closer any change to the speech screening policy is to the platforms’ and users’ ideal locations.

By definition, lawmakers only change the location of the speech screening policy when moving from immunity to liability. Consider first the radicalness of the change in the speech screening policy. The less radical the change, the closer the policy remains to the status quo, and as a result, lower compliance costs are incurred by platforms and users, which implies smaller \({\bar{h}}\) and \({\bar{k}}\). Smaller \({\bar{h}}\) and \({\bar{k}}\) imply lower platform and user enforcement costs. A less radical change also implies smaller enactment costs \(\psi _{1}\left| y\right|\), given that those costs are directly proportional to the distance from the status quo. As a result, enactment and enforcement costs under platform liability decrease as the distance between the policy location y and the status quo decreases.

Consider second the distance of the speech screening policy from the platforms’ and users’ ideal locations. As the aggregate distances \(y-\alpha _{p}\) and \(y-\sigma _{i}\) decrease, platform non-compliance and user incivility decrease, which in turn reduces enforcement costs under the new policy location of the platform liability regime.

Corollary 1

Inasmuch as implementation of a platform liability regime or a move to a new speech screening policy is unconstitutional or prohibitively costly, lawmakers prefer status quo immunity, but platforms are free to set strong content moderation policies consistent with existing law.

When implementation of a new content moderation regime is unconstitutional or prohibitively costly, \(\psi _{1}\left| y\right| +r_{1}{\bar{h}}+w_{1}{\bar{k}}>r_{0}{\bar{h}}+w_{0}{\bar{k}}\). Lawmakers continue status quo immunity irrespective of platform moderation policies \(h_{p}\left( \theta _{p}\right)\), their influence on \({\bar{h}}\), and the resultant impact on enforcement costs \(r_{0}{\bar{h}}\).

Proposition 3

Given a constitutionally fixed speech screening policy, lawmakers prefer platform immunity, even if user incivility is increasing, if platform enforcement cost savings under immunity exceed user enforcement cost savings under liability.

Under status quo immunity, lawmakers are faced with the decision of implementing a liability regime given enactment costs \(\psi \left| y\right|\), platform liability enforcement costs \(r{\bar{h}}\), and user costs \(w{\bar{k}}\). If the speech screening policy is constitutionally fixed and remains unchanged after a move from platform immunity to liability, then enactment costs \(\psi \left| y\right|\) are 0, and lawmakers compare \(r_{1}{\bar{h}}+w_{1}{\bar{k}}\) with \(r_{0}{\bar{h}}+w_{0}{\bar{k}}\). Recall that platform enforcement costs under liability \(r_{1}\) are greater than platform enforcement costs under immunity \(r_{0}\), since lawmakers must enforce platform liability rules related to user speech in addition to platform liability rules unrelated to user speech. However, user enforcement costs under liability \(w_{1}\) are less than user enforcement costs under immunity \(w_{0}\), since platforms engage in greater moderation under liability. Lawmakers, therefore, prefer immunity when platform enforcement cost savings \(\left( r_{1}-r_{0}\right) {\bar{h}}\) under immunity exceed user enforcement cost savings \(\left( w_{0}-w_{1}\right) {\bar{k}}\) under liability for any level of incivility \({\bar{k}}\).
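Rearranging the comparison with enactment costs set to 0 makes the condition explicit: immunity is preferred when

$$\begin{aligned} r_{1}{\bar{h}}+w_{1}{\bar{k}}>r_{0}{\bar{h}}+w_{0}{\bar{k}}\quad \Longleftrightarrow \quad \left( r_{1}-r_{0}\right) {\bar{h}}>\left( w_{0}-w_{1}\right) {\bar{k}} \end{aligned}$$

with both sides positive under the assumptions \(r_{1}>r_{0}\) and \(w_{1}<w_{0}\).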

4 Conclusion

In many jurisdictions, platforms are immune from liability for user speech acts. However, lawmakers in those jurisdictions may be concerned with platform civility and its impact on institutional health. In the model, lawmakers are faced with the decision to continue a policy of platform immunity or implement a platform liability regime. Lawmakers prefer continued platform immunity if the costs of implementing a platform liability regime are greater than the costs of enforcing status quo law. In addition, inasmuch as implementation of a platform liability regime is coupled with new speech restrictions that are unconstitutional or prohibitively costly, lawmakers prefer immunity, but platforms are free to set strong content moderation policies consistent with existing law.