Zuboff: No Provider Shall Be Treated

I’m going to step out of my lane for a second here and talk about Shoshana Zuboff’s The Age of Surveillance Capitalism. You’ll only see one or two articles from me on it – it’s a good read, but it’s not the sort of book I want to say much about. The Age of Surveillance Capitalism is an introduction to capitalism in the age of Google and Facebook. The main text runs to over 500 pages, with an additional 130 pages of notes and citations. That’s part of the reason I’m not going to say much about it – it’s already an introductory text. It collates a bunch of different sources and brings them together in an overarching narrative about what surveillance capitalism means and how it operates. It’s got big red ‘Start Here’ signs, if it’s a topic you’re interested in. Pick it up and loot the bibliography.

In the meantime, there’s one brief moment that I wanted to touch on. In Chapter Four, ‘The Moat Around The Castle’, Zuboff discusses how, in the lead-up to 9/11, Congress was moving towards extensive data privacy laws. Recommendations from the Federal Trade Commission included “clear and conspicuous notice of information practices; consumer choice over how personal information is used; access to all personal information, including rights to correct or delete; and enhanced security of personal information.” After 9/11, all that went out the window, with governments encouraging tech companies to collect and analyse our data (and then pass it on). The CIA even have their own venture firm – I think this is something that everybody kinda expects, but it’s weird knowing the name – called In-Q-Tel, whose job is to go round funding Silicon Valley projects so that the government gets the best new information technology.

Anyway, the thing that caught my eye was from a couple of years before all that. In 1996, the US government passed the Telecommunications Act, apparently the first major overhaul of telecommunications law in sixty years. Within that, Section 230 of Title V of the Act held that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Platforms aren’t liable for the shit their users do. So, for instance, if I was going around saying defamatory things about Old Spice, Old Spice couldn’t turn around and sue WordPress for hosting my defamation. WordPress isn’t a ‘publisher’ of my stuff, in that sense. They’re different from, like, a newspaper or magazine, which would be liable – because a newspaper exercises editorial control over what it publishes.

Now, obviously Section 230 has come under heavy fire in recent years. People are using various digital platforms to do increasingly bad things, and we all sort of want Facebook and Twitter and so on to take more responsibility for what’s done on their platforms. We know about the Russians interfering in the 2016 US elections with a fuckload of bots and propaganda on Facebook, we know about neo-Nazis who tweet, we know about terrorists live-streaming their terror attacks on Facebook. According to Section 230, Facebook and Twitter aren’t legally responsible for any of those things. But we kinda want them to be. And to be clear, there are already some areas where online platforms do have liability. There’s stuff around copyright that they’re obliged to do under the Digital Millennium Copyright Act, which is where you get the DMCA copyright strike system on YouTube. More recently, there are also exceptions under the 2018 FOSTA-SESTA acts, meaning that platforms can be liable for facilitating sex trafficking.

Despite these areas of liability (and a few others too), Section 230 is still causing problems. The Russians are out interfering with other people’s democratic processes, and the companies responsible for facilitating that interference aren’t legally obliged to stop it from happening. That’s a problem. But it wasn’t the intention when the law was designed. In her book, Zuboff notes two key cases that prompted the creation of Section 230. In the first, Cubby v. CompuServe (1991), CompuServe was found not liable for a defamatory post that some dickhead put up in one of their online forums. A newspaper that printed the same thing would have been liable, but CompuServe didn’t review the post before it went up – it was just some guy using their forum – so they had no editorial oversight, and they were judged not liable. They were more like a distributor than a publisher. Then, in 1995, another company had more or less the same situation in Stratton Oakmont v. Prodigy – some dickhead used Prodigy’s forum to post something defamatory about a third party, and Prodigy got sued. This time, though, the company was actually found liable, on the grounds that they had community guidelines and moderators. They were clearly making an effort to monitor what was published in their forums, and they exercised editorial control by deleting offending posts – so they were considered publishers rather than distributors, and therefore liable for publishing the defamatory material. It was a worst-of-both-worlds scenario. Websites could either try to moderate content, and get fucked every time something slipped past, or allow everything and avoid getting sued. Section 230 was intended to resolve that situation. It allowed websites to moderate their forums and shit without getting punished every time they got it wrong. No provider of a service would be treated as a publisher.

And in itself, that’s not a bad idea. There was a stupid legal situation, and the law was updated to make things more sensible. Fair enough. The problem is that things change. Motivations change. Previously, CompuServe and the other guys didn’t want neo-Nazis on their platforms, because it looked bad. There was a good faith assumption that they’d filter out the Nazis. But because Google and Facebook are built around tracking your online behaviour, they’re reluctant to exclude groups of people – reluctant to lose that data. They don’t have the same motivation to police their forums. Consequently, we now have a world where neo-Nazis go around tweeting, and the companies responsible for distributing their neo-Nazi tweets have legal immunity under a law that was meant to protect distributors precisely so that they’d moderate the neo-Nazis off their platforms in the first place. Rather than accepting their social responsibility, the major surveillance tech companies run a hard free speech defence, drawing heavily on the First Amendment: we’re not legally liable for our neo-Nazi tweeters, and the law that lets them run amok on our site shouldn’t be changed, because that would infringe on their free speech rights. Zuboff doesn’t mention this, but of course the Nazis have twigged on to the free speech thing too. We’re seeing Facebook and Nazis using the exact same rhetorical devices, which – don’t be dramatic, okay. That’s not to say that Zuckerberg or whatever is a Nazi. He’s just a businessman selling ads and showing them to his Nazi users. They’re part of his revenue stream. That’s all.
