Twitter took unprecedented action recently on two tweets from President Donald Trump.
The first was adding a fact-check notification to a tweet about mail-in voting.
And the second was placing a warning in front of a tweet insinuating that looters would be shot amid the protests over the death of George Floyd, which the platform said violated its policies against glorifying violence.
The fact-check of the first tweet sparked ire from the president, which only intensified when the warning label was placed on the second tweet.
While Twitter has intervened in the president's posts, Facebook hasn't.
Both messages appeared unaltered on the president's Facebook page.
We have a different policy I think than Twitter on this.
You know, I just believe strongly that Facebook shouldn't be the arbiter of truth of everything that people say online.
Twitter CEO Jack Dorsey has said the platform will continue to fact-check information about elections.
But how did Facebook and Twitter come to take different views on moderating the president on their platforms?
It's the latest question in an ongoing debate about the responsibility tech companies have in policing speech online.
You can think of online content moderation a bit like book publishing.
If a book contained objectionable content like hate speech, the publisher would be responsible for editing that material out before the book shipped to bookstores.
If that stuff somehow made it into the book, and that book somehow made it onto shelves, the bookstore couldn't be held responsible for what was in the book since it had no say in its creation.
Unlike say a newspaper or a traditional publisher, the platforms operate completely differently with the idea that they're sort of providing the place to put this stuff for individuals to publish themselves.
In the 1990s, many early online forums like CompuServe chose to not actively moderate content on their sites, while other sites like Prodigy did.
A series of court rulings determined that sites that actively moderated their content were more like publishers and therefore, could be held liable for defamatory content.
And this was viewed as something that was going to greatly slow down the development of the internet in general, and really threaten the ability to build a functional ecosystem.
To address this issue, Congress included a provision in the 1996 Communications Decency Act called Section 230.
So, Section 230 allowed them to be focused on being platforms and not on being publishers.
They could intervene when they wanted to for the good of their platform.
But they didn't have any responsibility beyond making sure that they weren't becoming a cesspool of illegal behavior.
But this prioritization of growth over moderation came with consequences.
In 2016, social media disinformation campaigns during the election put pressure on Facebook and Twitter to step up their moderation efforts.
We were too slow to spot this and too slow to act.
Propaganda through bots and human coordination, misinformation campaigns, and divisive filter bubbles.
That's not a healthy public square.
There was a sense that this was getting to be pretty lawless and also that these ecosystems were extremely vulnerable to manipulation.
Facebook and Twitter both rolled out fact-checking operations to combat misinformation.
Facebook's had sort of an independent fact-checking program.
Twitter's done a bit more stuff internally.
But the general idea is that neither of the platforms want to do much regulation of speech, particularly when it comes to things like censoring the president of the United States.
But both internally and externally, tech companies have faced mounting pressure to confront what the president posts on their platforms.
None more so than Twitter.
Trump has been a very aggressive user of Twitter.
It's kind of his native medium.
And Twitter recently took a step.
Twitter's decision to add its fact-check label to the president's May 26th tweet was the first time the platform had fact-checked information outside of the coronavirus pandemic.
Two days after the tweet, the president signed an executive order seeking to limit the broad legal protections tech companies have under Section 230.
Currently, social media giants like Twitter receive an unprecedented liability shield based on the theory that they're a neutral platform, which they're not.
Section 230 doesn't have anything to do with political bias.
In fact, the whole point of 230 was to allow them to intervene without fear of legal repercussion.
So, this isn't something that is a natural outgrowth of the law.
The morning after the executive order was signed, Twitter placed a warning message on the president's tweet about looters, while Facebook let the post appear unaltered.
Mark Zuckerberg said that while he personally had a visceral negative reaction to the post, it didn't violate Facebook's policies.
When Twitter sort of stepped in to this thing, Facebook ducked.
Facebook has, for years, tried to stay out of regulating political speech, in particular political ads.
They have sort of resisted calls from what might have been natural allies to restrict this stuff.
Twitter has also applied a warning label to a tweet by Congressman Matt Gaetz, saying it too glorified violence.
Twitter does seem to be making a decision that there are some limits it wants to put within U.S. politics.
And that's something that has been kind of a third rail for American technology companies.
Zuckerberg said Facebook will review existing policies on how it handles content related to civil unrest or violence.
Where exactly the president's order, and broader attempts at legal restrictions on the platforms and their moderation, will lead is unclear.
This certainly seems like it's gonna draw them deeper into politics, and that's something that, you know, Mark Zuckerberg I think has been desperately trying to avoid.