Capitalisn't

Regulating Facebook and Google Pt 2: Politics

Episode Summary

In part two of our series investigating how digital platforms like Facebook and Google should be regulated, Kate and Luigi dissect the ways these companies interact with our political system by speaking with Nolan McCarty, Susan Dod Brown Professor of Politics and Public Affairs at Princeton University.


Episode Transcription

Luigi: Last week, the Stigler Center at the University of Chicago organized a conference scrutinizing the digital platforms, especially Facebook and Google.

Kate: On our last episode, we spoke with one of the conference leaders, Fiona Scott Morton, about the market structure of these companies.

Luigi: On this episode, we’re going to investigate these companies from a political economy perspective.

Kate: From Georgetown University, I’m Kate Waldock.

Luigi: And from the University of Chicago, this is Luigi Zingales.

Kate: You’re listening to Capitalisn’t, a podcast about what’s working in capitalism today.

Luigi: And, most importantly, what isn’t.

Kate: In today’s episode, we have the honor of having as a guest Nolan McCarty, Susan Dod Brown Professor of Politics and Public Affairs at the Woodrow Wilson School at Princeton University. He is the author of many successful books, the most recent of which is Polarized America: The Dance of Ideology and Unequal Riches. Nolan was the head of the conference’s political economy subcommittee. Welcome, Nolan.

Nolan McCarty: Great. Thank you. Glad to be here.

Kate: Can you start us off by explaining why a political scientist like you is interested in companies like Facebook and Google?

Nolan McCarty: Well, after the outcome of the 2016 election and the Brexit referendum in the UK, there’s been a lot of scrutiny of how these platforms were used to manipulate elections, both in the US and the UK. But a second reason, and one that I think is much more important, is that these platforms and the corporations that run them have themselves become very important political actors. What our subcommittee tried to do was bring those two issues together and analyze them jointly.

Luigi: Would you mind defining for us what it means to be a powerful political actor, but more importantly, why are Facebook and Google so special in this dimension?

Nolan McCarty: Well, first of all, we believe that large corporations are just inherently politically powerful. They have lots of economic resources. Those economic resources are things that politicians tend to defer to. Politicians, to get reelected, need economic growth and good performance.

Secondly, large corporations are often quite politically active. This is certainly true of the major digital media platforms. They’re all engaged in making large campaign contributions. They engage in lobbying; in fact, they’re some of the biggest, most active lobbyists in the country, and they lobby not only on issues related directly to digital media but on a wide variety of issues. If you think about a company like Amazon, they’re not only lobbying on things related to digital technology but also on land use, consumer product safety, a wide variety of things. And so, these have become quite dominant actors simply because they’re powerful corporations.

Kate: I guess it makes sense for Amazon to care about issues like land use and consumer product safety, because they have big warehouses and they sell a lot of consumer products. But it seems like Facebook and Google also touch these areas, even though they’re more purely digital. Why would Facebook and Google care about these sorts of issues?

Nolan McCarty: Well, I mean, they’re involved in selling advertising to a large degree, and so anything that might affect their business models related to advertising would also come under their umbrella. They, too, have very large agendas, ones related to international trade, copyright protection, privacy. A very large number of issues are covered by these firms. In fact, if you just look at statistics provided by OpenSecrets.org, among corporate lobbyists, Alphabet, which is the parent company of Google; Amazon; and Facebook were the first-, fifth-, and eighth-most prolific spenders on lobbying.

A second issue is that they play a role as a kind of quasi-media outlet. So, they have certain First Amendment claims that they might be able to exploit to avoid regulation of the content on their platforms. They also are involved in very complex activities — artificial intelligence, data — all of which are fairly opaque, which makes them less likely to face public scrutiny and accountability. The third is that they’re highly connected. Unlike most corporations, they have a direct connection with their users and consumers on a daily basis. While Coca-Cola has millions of consumers, its ability to communicate with, mobilize, and engage those consumers on political issues is somewhat limited.

But Facebook and Google can routinely utilize their platforms to mobilize their users on a set of policy questions in ways that other corporate groups can’t. In a lot of ways, these platform companies are less like corporate lobbyists than like big, major membership organizations such as the National Rifle Association or the American Association of Retired Persons. And, finally, these companies get a certain amount of latitude from policymakers due to the emergence of economic nationalism and the race to be on top in artificial intelligence and data science, as part of a national economic strategy to not fall too far behind in technology and artificial intelligence, vis-à-vis China in particular. Those are claims that these firms often make when lobbying in pursuit of their corporate goals.

Luigi: If I get this right, what you’re saying is that Facebook and Google combine the powers that normally rest in the hands of the NRA, CNN, Boeing as a national champion, and JP Morgan in terms of money. This is the first time in human history that corporations have all these powers together. Am I right?

Nolan McCarty: I can’t say for all of history. Perhaps the East India Company in Britain or something like that, maybe. But it’s hard for me to think, at least in American corporate history, of a company or set of companies that has all of these attributes rolled into one.

Kate: All of these powers combined make me think: if I were one of the founding fathers, and I could foresee that digital platforms would have this type of power in the future, would I have cared at the time? Is there anything that I should have been concerned about? And, if so, what could I have done about it?

Nolan McCarty: I mean, it’s certainly the case that the founders were quite concerned with concentrations of power, period. But it’s pretty clear that, at least in the modern world, the real concerns about heavily concentrated power are less about democratic governance and much more about corporations. To draw an analogy that’s discussed in our report, think about the printing press. The printing press was a technological innovation that had a profound political impact, but its scale was small. One can’t monopolize the printing press, so it didn’t concentrate political power; it deconcentrated it. Here we have the opposite situation, where we have new technology with a lot of potential to disrupt politics, to dictate the ways in which elections and political discussions take place. Yet at the same time, that power is concentrated in so few hands that the analogy might be one in which you have a printing press, but a single entity has the ability to turn that press off and on at will. That’s the concern about the concentration of power.

Luigi: But aren’t we overreacting a bit? Because in 2010, 2011, everybody was proclaiming how great social media were, how equalizing. After all, it’s true that the printing press was decentralized, but it’s also true that it came at a pretty large fixed cost, and so the ordinary citizen could not have access to his own or her own printing press. Now, everybody has access to Facebook. Everybody has access to Google. Everybody has access to Twitter. They can participate in the political arena in a way that they couldn’t before. Why is this necessarily bad?

Nolan McCarty: I don’t think those aspects are bad. The political effects of social media platforms are both positive and negative. Social media has the potential to be a tool for mobilizing. This has been especially important in authoritarian countries where activists are trying to undertake democratic reforms. It’s become a tool for political engagement and, as Luigi suggested, has also broken down the monopolies that the powerful have over information. Social media allows individual citizens to communicate with one another with very little interference from authorities. But there are dark sides to political mobilization, engagement, and the breakdown of informational gatekeepers. People can be mobilized in hate campaigns. They can be mobilized to engage in ethnic conflict. They can engage in hate speech and rhetoric. The lack of informational gatekeepers has meant a deterioration in the quality of some of the information that’s available on the Internet. When we’re concerned about misinformation and bad information, that goes hand in hand with this democratization of access.

Luigi: One of the elements to identify is that certainly Facebook and Google, I don’t know about the other platforms, but Facebook and Google look like media companies. They fight very aggressively against this definition, but at the end of the day, Facebook edits the news that it feeds into our profiles. In fact, it edits the news for a very specific purpose: to maximize the time and attention we spend on the platform. Google does the same with YouTube. This objective is reached by feeding customers more and more extreme news. And in feeding this news, Facebook and Google take no responsibility for its content. They’re happy just to take the profits. Ironically, they’re able to do so thanks to part of a law that was approved in 1996, the Telecommunications Act. Section 230 of that act grants digital platforms a special exemption from their responsibility as editors. What do you think about this particular law, and what can be done to fix this problem?

Nolan McCarty: Yeah. This was one of the most controversial issues that we encountered: platform liability. They act a lot like media companies, so they should perhaps be subject to defamation laws and all of the sorts of liabilities that go with news publication. I think the concern, however, with eliminating this liability protection comes from the fact that the social media landscape is so concentrated. If one were to remove the protection, and that were to have a conservative effect on the companies’ moderation policies, such that they would throw a lot of speech offline and close off their platforms to users who engaged in alleged hate speech or manipulation or misinformation, there wouldn’t be the competitive forces to get those moderation policies right. Social media platforms may just be too aggressive in moderating speech.

If, say, Facebook and Google become too vigilant in terms of policing speech, then that speech will just move to less public, less accountable forums, such as certain chat rooms on Reddit, et cetera. In the end, our committee didn’t make a strong recommendation with respect to changing liability protections, because of these competing concerns and our lack of clear guidance about whether or not Google and Facebook would get the moderation policies right in the face of a new liability regime.

Luigi: Nolan, I’m very sympathetic to your concerns that digital platforms are monopolies and, as such, not accountable. I’m concerned about giving them too much editorial power. However, remember that Section 230 simply removes the liability. It does not remove the editorial power. They still retain the power to edit. They edit what kind of advertising they release. At some point, they decided, we don’t advertise cryptocurrency anymore. Now, I don’t particularly care about this topic, but the fact that Facebook and Google together can become a regulatory agency is a bit scary. They can decide what is appropriate or not. At some point, they decided that using the logo of Facebook in Facebook ads was bad. As a result, an ad from Elizabeth Warren was taken down. So, it’s not that these platforms don’t have editorial power, it’s that they have editorial power with no accountability and no liability. That seems like the worst possible outcome.

Nolan McCarty: I don’t disagree. I think the debate pits those who want the maximal amount of free expression, and worry that removing the liability protection would make moderation much more restrictive, against those who think that most of what goes on on Facebook and Google should receive more moderation. At the end of the day, one’s view about removing Section 230 liability protection really depends on whether you think there’s too little free expression or too much free expression. The point about monopoly is simply that we would have only liability law and the courts to define proper moderation. You wouldn’t have the competitive pressures that would allow certain platforms to moderate more intensely and some less intensely, and let consumers decide which type of platform they want to spend their time engaging with.

Luigi: Yeah, but if I can follow up on this, think about the famous, or infamous, case in Myanmar, where Facebook basically kept feeding news inciting abuses of the Rohingya, news that fed part of the massacre that took place in that country. Why don’t you want them to be responsible for feeding this news to many, many more people and, in a sense, cultivating the hate that eventually led to ethnic cleansing and rioting and a lot of people being dead? Why don’t you want to put some responsibility on them to edit this stuff out?

Nolan McCarty: I’m more or less reporting a lack of consensus on the committee. In some sense, I’m answering a question about how others on the committee felt about this issue rather than giving my own personal views. I should clarify that. I guess the response would simply be that we don’t want to allow certain events to create bad regulatory regimes that will exacerbate other types of problems. So, getting the balance right is very important on these things. But, as you point out, there have been some terrible things that happened with respect to social media. I think the consensus on the committee was that changing the liability laws would not do it, but perhaps other forms of government supervision, regulation, and increased transparency might have a much more direct effect without hampering free expression on the platforms in the same way.

Kate: If the jury or the committee is out on Section 230, could you tell us more about these other proposed forms of regulation?

Nolan McCarty: In some sense, we concurred with the other working groups in feeling that pure self-regulation of the media platforms was not going to work. There’s not a great track record of self-regulation in this industry when it comes to privacy and other areas. We endorsed the idea of creating a digital regulator, a digital authority, that would be empowered to supervise and write rules and to facilitate transparency and disclosure within the social media environment.

The challenge is designing a regulator in such a way that we balance the concern that a powerful industry such as the social media platforms would have undue influence over the digital regulator, so that it would not regulate enough, not go far enough, against the notion that the digital authority would have to be democratically accountable. We put forward some principles for designing such an agency. But the main thing that we stressed, and I think this is consistent with the other groups in the conference, was that the digital authority should collect data in real time and should make that data available to researchers both within the government and outside it, in ways that let us actually better understand how social media platforms generate the political consequences that they do.

Kate: The overarching theme of this conference is antitrust and digital platforms. We had as a guest on one of our previous episodes Lina Khan, who introduced us to the idea of the New Brandeis movement and the New Brandeisian approach to antitrust, which claims as part of its mission the idea that antitrust law, as well as its enforcers, should consider political economy concerns in addition to just consumer welfare, the traditional economic approach to what makes a monopoly. But, in suggesting that we should create this digital authority, you’re necessarily partitioning the political concerns from the traditional economic concerns. Does this mean that you disagree with the New Brandeisians?

Nolan McCarty: No, I don’t think so. In fact, one of the things we’re open to saying is that the digital authority could conduct analyses of the political implications of social media mergers and either provide that information to the traditional antitrust regulator or, perhaps more ambitiously, form the basis of a dual review of mergers, in which the digital authority would evaluate mergers in terms of their political and social consequences and have to sign off on them. The difficulty we face on the political side is that the research on the political impacts of social media is relatively nascent. We don’t really know as much as we should about the impacts of Facebook’s platform and its policies on elections, polarization, and manipulation.

I think the first step really is to have the digital authority open the data vaults and make them available to independent researchers, so that they can study these questions, with the goal of perhaps coming up with quantifiable standards for when we might expect additional concentration to have negative political consequences. One of the reasons that antitrust seems dominated by economic concerns is that they’re quantifiable. I mean, not perfectly, but at least data can be brought to the table.

On the political side, again, we’re entirely sympathetic to the notion that antitrust should serve these political and social considerations as well as economic ones. But we lack the methodologies and the data to evaluate them. This is something the digital authority should look into. It should use its own research, and facilitate independent research, toward the goal of being able to study the extent to which social media concentration negatively impacts political outcomes. Then, once we know that answer, it can be brought into public policy more readily.

Luigi: Nolan, you’re right that we don’t have good tools to analyze, if you want, political concentration. But, to be fair, the media subcommittee did look at the aspect of media concentration, and there, there is more of a tradition. In fact, in the UK, when looking at concentration in media, the antitrust authority brought in some form of citizen welfare, as an alternative to consumer welfare, in thinking about the effect of a merger, and said that even if a merger does not have an impact on price and quality, if it does have an impact on the diversity of information, or the potential diversity of information sources, it must be considered anticompetitive. Do you have a view on this citizen welfare as an alternative, or I would say as a complement, to consumer welfare as an antitrust criterion?

Nolan McCarty: I’m certainly very sympathetic. If we knew for certain that a particular merger would lead to, say, more misinformation and more manipulation of elections and would deteriorate the quality of democratic governance, that certainly is a merger that should at least receive extra scrutiny. So, the principle is one I subscribe to. The issue, really, is that, unlike with newspapers and perhaps broadcast journalism, we don’t have a good sense of how mergers would affect audience and diversity of viewpoints, because we don’t really have good evidence on which Facebook posts receive the most attention, or basic information about the diversity of information on Facebook and how that might be affected by, say, Facebook’s acquisition of Instagram.

We lack the kind of information about audiences, and how audiences might be affected by mergers, that we would have with newspapers and broadcast media. What I’m proposing is that we get the data, we study the question, and if it can be shown that mergers or acquisitions like Facebook’s acquisition of Instagram lead to less diversity of options in terms of political information, or lead to other bad political outcomes, those mergers should receive extra scrutiny, and those considerations should be taken as seriously as the traditional economic ones.

Luigi: It doesn’t look like we have to wait long to figure out that the acquisition of WhatsApp or Instagram by Facebook did significantly increase the monopoly power of Facebook in social media, and as such should have been blocked, or maybe should now be reversed.

Nolan McCarty: Yeah. In my own personal view, I don’t disagree with that at all. It’s especially telling that many of the implicit promises Facebook made, to run Instagram separately and to not destroy the niche of WhatsApp, the encryption niche, seem to have gone by the wayside, both through the decision to run Instagram as part of Facebook and through Facebook’s decision to come up with an encrypted version of Messenger, which would make WhatsApp obsolete. I think there’s little doubt that those acquisitions were anticompetitive, both in economic ways and in important political ways, by depriving citizens of alternative venues to communicate with one another. It’s simply that we don’t have the kind of hard quantitative evidence on the magnitude of those effects that I think we would need to be successful in blocking those mergers on political grounds.

Kate: To this point about hard quantitative evidence, though, do you necessarily need hard quantitative evidence about the potential problems that, let’s say, a digital platform could bring up in order to properly regulate it? One thing that comes to mind is, should we be concerned about black-swan-type events where, let’s hope this doesn’t happen, but the CEO of a large digital platform that has its fingers in every single country all of a sudden does something to incite war, or engages in some sort of terrorist attack, because the CEO feels strongly about that? We’re never going to have hard quantitative evidence on these types of risks, and these types of risks might be minuscule. And yet, I think that from a policy perspective, it’s something that we should consider.

Nolan McCarty: Absolutely. Actually, that’s a very good point, and I don’t disagree with it. But let me take another issue. One of the concerns about social media is that it’s polarizing. The reason people argue it’s polarizing is that it allows users to engage only with content that confirms their preexisting biases. We can get situations in which clusters of users group themselves into liberal online communities, conservative online communities, and communities associated with this group or that group. From an individual perspective, that’s totally rational, and individuals might derive some benefit from being able to identify likeminded people and likeminded information and engage with ideas that they more or less agree with. But from a social perspective, that might be destructive. So, the question is, before we regulate, we really ought to know how socially destructive this behavior is.

If it turns out that it’s not much of a problem, that it doesn’t have hard, observable manifestations, then maybe we should allow people the free will to choose who to interact with and allow platforms to design themselves in such a way that those people can find each other. If we do decide that such things are destructive, then maybe we do want to force platforms to develop architectures that force people to engage with people that they disagree with. I guess my view is before we intervene in something as basic as the freedom of association online, we really ought to know how big the social consequences are.

Luigi: One thing is to restrict association online, which would be a very strong intervention. Another thing is to limit the power of Facebook and Google to feed you the most extreme stuff to maximize their profits. And to some extent, what is new here? People have sorted into ideological communities since the beginning of humankind. What is new? One thing is the scale, of course. But the other part is how much these communities are fed worse and worse content, because this keeps them attached to their smartphones.

Nolan McCarty: Yeah. To push back a little bit, though, we also need to know whether Google and Facebook are doing this because they have some sense that this is exactly what users want and what will keep them engaged. So, I still think we need to know something empirical about the consequences of those platform designs before we say that Google and Facebook can’t provide a service that they believe their users want. You know, I agree, it’s likely the case that we don’t want YouTube to feed people more and more extreme videos just to keep them engaged. But I think, as scholars, we have a responsibility not to leap to those conclusions but to document them as well as we can empirically, to justify regulatory actions that would prevent YouTube from engaging in those activities.

Luigi: Yeah. Unless you think that this stuff is addictive. When cigarette manufacturers were producing cigarettes that were more and more addictive, you didn’t need a lot of thought to put regulation in place.

Nolan McCarty: That’s a great analogy, and one I wanted to bring up. One of the things that we focus on, as I mentioned earlier, is that these platforms do an awful lot of research internally about their algorithms and how those algorithms influence people to remain engaged, and they presumably have data that can tell us the extent to which this is something that users want or something that users get addicted to, in exactly the same way that the tobacco companies did tons of internal research on exactly the same questions.

One of the things that a digital regulator might want to know is exactly how these companies came to the conclusions that led them to develop those policies. Just as we eventually learned a lot from seeing the internal research of the tobacco companies, I think we might learn a lot from the internal research of the platform companies, which could help us adjudicate exactly whether these platforms are responding to legitimate consumer demands or just feeding addictions.

Kate: Nolan, I have a somewhat philosophical question for you, which is maybe a little outside the scope of the committee report. Let’s say that misinformation and hate speech and these sorts of issues, like fake news, weren’t necessarily a problem, but that companies like Facebook and Google had more information about people’s consumption patterns, what businesses exist in the economy, where people are driving, and what infrastructure is crumbling. Let’s just say that we have companies that know much more about our society and about our economy than the government does. Is that in and of itself a problem?

Nolan McCarty: I guess it depends somewhat on how those companies would use it. However, given that the capacity of our government to solve problems is in some ways hindered by a lack of knowledge about those problems, it would probably be good, responsible corporate citizenship to share much more of that information with the government, obviously while protecting privacy. We’re in a unique situation where economic concentration, political concentration, and, as you suggest, the concentration of information in a few entities make them uniquely powerful, and that is something we really ought to focus on. Because if we become more dependent on Facebook and Google and their private philanthropic efforts to solve problems than on government, then the types of problems that get solved will reflect their priorities and not democratically established priorities. In that sense, I think the scenario you lay out is in fact a problem.

Luigi: Nolan, thank you very much for being on the program. This has been incredibly useful. People who want to know more can visit the Stigler Center webpage, where they can find all the panels of the conference. There’s probably more there than you can take in.

Nolan McCarty: Thank you.