The concept of a “common carrier” is one that has been applied to transport companies for centuries—the first such case on record in English common law dates back to 1348—although in more recent times the concept has been applied to telephone companies, Internet service providers and others that transport electronic content, not just physical goods.
Should this concept apply to Twitter, Facebook and other social media companies that provide transport of information? In other words, should social media companies simply transport all content sent by their users without applying any sort of filtering to prevent transport of things they determine to be offensive, illegal or otherwise not in their best interests?
At issue here is the significant number of tweets from a small handful of Twitter account holders that contain inflammatory content or direct threats against others. As just a few recent examples, there have been a large number of anti-Semitic posts from French-speaking Twitter users, Mitt Romney has received a significant number of death threats via Twitter, and numerous athletes have received death threats after making mistakes in big games. Add to this the enormous problem of cyberbullying, which victimizes large numbers of young people.
The technology or practice of censoring tweets is not the issue: Twitter can and does censor content already on a country-by-country basis. The much bigger issue is whether Twitter should filter its content to prevent this type of content from being transported on its network. A common carrier generally cannot do so unless the service is being used for an illegal purpose [Movietime Inc v. NY Telephone Co., 277 App Div 1057, 101 NY Supp.2d 71 (2d Dept 1950)]. However, a common carrier must be certain of the illegal activity and have evidence that its services are being used for illegal purposes [Nadel v NY Tel., 170 NYS2d 95 (1957)] before it is permitted to deny access to its network.