Noah Feldman on Constitutions, Content Regulation, and Boaty McBoatface




In the office of Harvard Law School professor Noah R. Feldman ’92, oil paintings share space with an ergonomic monitor arm. The light-filled room synthesizes the traditional and the modern, placing relics of the past alongside technologies of the future.

Feldman, who specializes in constitutional law, draws upon established political systems to tackle the emerging, ever-changing domain of the digital world. He’s previously advised constitutional processes in Iraq and Tunisia. Now, he wants to develop systems of governance for social media platforms.

His foray into digital governance began with the founding of the Oversight Board, an independent body that makes decisions about content moderation and freedom of expression on Meta’s platforms, which include Facebook and Instagram. Under the Oversight Board’s charter, Meta provides funding and appoints trustees while committing to abide by the group’s decisions. The purpose of the Oversight Board is to “reduce the social harms that could be caused by social media and, in their place, promote a greater degree of social good,” Feldman says.

In the context of social media, “harm” is a loaded term. “We’re not going to all agree on what’s harmful,” Feldman says. He lists examples of what most social media platforms label as harmful content: true threats, the promotion of violence, and discriminatory speech. Other categories are less clear-cut.

“There’s genuine disagreement about whether sexual content should be conceptualized as harmful or not,” Feldman says. “Is there something harmful about sexual content? And if there isn’t something harmful about it, should there still be reasons to restrict it?”

“There has to be a conversational process where we collectively try to figure out what we think is harmful,” Feldman says. He explains that tech companies often define harm in conversation with users, social organizations, and the media.

Even if social media platforms are able to regulate content that is clearly designated as harmful, governing bodies for these platforms would still have to contend with what Feldman calls “humans’ creative capacity to try to get around any rules you set for them.” Feldman discusses “pro-ana” communities: online subcultures that promote anorexia, which has the highest fatality rate of any psychiatric disorder. Although its most overt content is censored by many social media platforms, the pro-ana movement encodes its messaging in subtler, more visual language, masking itself in the aesthetics of art photography and designer fashion.

In a case like this, content moderation becomes a problem of figuring out where to draw boundaries. “If a platform said you can’t talk about the body or aesthetics at all on my platform, they would eliminate a lot of pro-ana speech,” Feldman says. “But they would draw it so strongly that they would also eliminate people saying you should be happy with your body.”

“If you wanted to make sure that nobody died in a car crash, you could lower the speed limit to five miles per hour,” Feldman analogizes. “Nobody would ever die in a car crash. But you would also then not enable people to go fast for all the purposes for which people want to go fast.”

To Feldman, the key considerations in designing regulatory systems for tech parallel those in crafting legal systems. He explains that separation of powers — a key idea in constitutional government — also underlies the concept of the Oversight Board. “The final decision on content should not be made by the platform itself,” Feldman says.

Another concept he sees as integral to both legal and technological systems is requiring decision makers to explain how they reached their conclusions. According to Feldman, this idea, which is part of the due process of law, heavily inspired the Oversight Board.

“When you post something and it gets taken down, it’s a human instinct to want to be able to say, ‘Hey, that wasn’t right.’ And it’s very hard to do that at scale, but giving people the opportunity to appeal is one way for people to have the opportunity to be heard,” Feldman says. He adds that allowing people to express their opinions is also an important part of constitutional governments.

One aspect of constitutional government that the Oversight Board and similar initiatives have yet to incorporate, Feldman points out, is allowing members of the public to vote on rules. He thinks many platforms see this as a long-term possibility, but creating fair voting systems on the internet is challenging because people may not take the vote seriously.

Feldman gives the example of Boaty McBoatface, a British research submersible. In 2016, the U.K.’s Natural Environment Research Council opened an internet vote to name a new polar research ship. After a month of voting, the public settled on “Boaty McBoatface.” While the council did not follow through in christening the vessel “Boaty McBoatface” — the ship is now the RRS Sir David Attenborough — one of the autonomous underwater vehicles aboard the craft was given the name.

Eventually, though, Feldman believes that the challenges that come with internet voting can be solved, alluding to ongoing experiments to find a “stable” solution.

With regard to tech governance as a whole, Feldman advises caution: just as “no constitution is perfect,” the current “experiments” in regulating technology “can’t solve every problem.” Instead, Feldman says, the focus in tech governance should be on placing checks on power.

While Feldman was once active on Twitter — where he often engaged with content on Ancient Near Eastern languages — he no longer uses social media except for professional reasons, quipping that he stays off it at the behest of his teenage kids.

“I think part of it is, I really, really, really love IRL interaction. It’s my favorite thing in the world, and I’m one of those people for whom the social isolation that came with Covid felt like a cost,” he says. “I desperately missed real contact with my real students, with my real friends, and with the rest of the world.”

“At the same time, I recognize social media as fascinating and transformational,” he adds. “I’m sure, just around the corner, there’s probably just the app for me.”

— Magazine writer Yasmeen A. Khan can be reached at yasmeenkhan@thecrimson.com.