Recently I started writing an essay about AI development, exploring its history and its future. Somehow it got away from me and became a small book. Order it here please, ideally in triplicate!
The 2000s were not a good decade for trust. After the Iraq War and 9/11 and the Great Financial Crisis, public trust across much of the world sank low. Banks were not trusted at all, having greedily plunged the world into the Great Recession. Governments were barely trusted, having lied to the public several times, including about the banks. The other major industries were already distrusted, whether Big Oil or Big Pharma.
A notable exception was the tech industry. People had high hopes for the world of technology, still seen as an interesting curio. Not long before, the dot-com bubble had burst, but the vigour had returned, helped along by the launch of revolutionary products like the iPhone. Apple dethroned Exxon as the most valuable company in the world, and Marc Andreessen declared that software was eating the world.
No company was more a poster child of this than Facebook. It had shaken off the troubles of the crash: started by a wunderkind, among the first startups to receive large sums of VC funding, and armed with a vision so vast that its potential customer base was the entire world.
Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.
In 2012, Mark Zuckerberg took Facebook public. After some haggling the valuation was set at $104 billion, the highest for any newly listed company thus far, at a share price of $38. But they were still darlings, just maybe a little too ambitious for what they were. Soon after, the investors didn’t seem impressed. From TechCrunch at the time:
The share price already dropped below $20 two weeks ago prompting some to ask whether it was cheap enough to start buying. But it still has far to potentially fall. As noted by WSJ’s Dennis K Berman, if it traded at Google’s same 14.7x earnings multiple on future earnings, $FB would be at $7.97.
Google had done well after its IPO, and closer to home LinkedIn had done really well, up more than 100% in a day. And Mark compounded the worries by doing things that nobody understood. While advertisers were still evaluating the platform, he spent $1 billion to buy Instagram, which was widely seen as a silly price for a company with only 13 employees.
Even so, there was still an undercurrent of tech optimism. Mark’s vision of connecting the world was seen as a Good Thing. The Great Financial Crisis had just ended and austerity continued, and tech gave us hope.
Occasionally there were news articles about how Facebook was creating potential privacy issues, starting with the idea that publishing our kids’ information could become problematic. And a few people even quit social media because they weren’t comfortable with the noise and wanted more authenticity, though without making much of a fuss about it.
Then came the Arab Spring, and tech was not only seen as good, but essential to the flourishing of democracy around the world! An unalloyed good, even if it seemed mostly pointless to the older generation.
In the next few years though, all this changed.
First came Edward Snowden’s NSA revelations in 2013, bringing to light a panopticon that the state was running. It was the second major blow, after the knowledge that the government had lied about WMDs to invade Iraq, that made the idea that the government lies to us feel real. Combined with the incompetence that led to the financial crisis, faith in the government was well and truly shaken. But while the NSA used tech, the industry itself was mostly spared, though it was starting to become suspect.
Then, in 2014, came the famous “emotional contagion” study, in which Facebook tweaked news feeds to study the emotional responses of its users. What had seemed until then to be a relatively harmless use of the ability to do societal-scale A/B testing turned out to be taboo in the worst of ways.
Newspapers ran horrified headlines likening this to people being treated as literal lab rats. We got comparisons to the Stanford Prison Experiment. Facebook apologised profusely, but the trust had sprung a leak.
And by 2016 we had Brexit, amplified by false messages that political parties used to spread their preferred agendas. Facebook and social media were seen as complicit in allowing the lies on their platforms to proliferate, even though those who fanned the flames were often elected officials or our own friends and relatives.
And then we had Trump. And the dam broke.
Whatever goodwill the tech industry had in the early 2010s, whatever utopian visions they held and benefits they brought, they all got tarred with a horrible brush. Social media was blamed for amplifying the lies of the commander in chief, for being too permissive in letting them spread and not aggressive enough in banning troll accounts. They could do no right.
And, somehow, then it got worse!
In 2018 Facebook had the Cambridge Analytica scandal. One that, in hindsight, turned out to be neither entirely Facebook’s fault nor very effective, but it still stands tall as an example of the way social media companies in particular, and the tech industry in general, act against the interests of humanity. YouTube, another quasi-social-media platform, got enormous flak for supposedly pointing people towards extremist content and radicalising them.
Big Oil didn't care about climate change. Big Tobacco didn't care about giving people cancer. Big Banks didn't care about anything, and Big Tech was seen as the same!
And then celebrities emerged from this societal discomfort. Tristan Harris, a former Google design ethicist, founded the Center for Humane Technology and critiqued the “attention economy” that social media wrought. He put together a sprawling presentation that excoriated the tech industry for making everyone addicts and destroying our humanity, sort of the exact opposite of what they all had as their founding missions.
Or Jaron Lanier, a computer scientist and philosopher of technology who explored the dehumanising aspects of tech in books like You Are Not a Gadget. He argued you should delete your social media accounts, voicing the now common feeling that you are the product. The argument wasn't all that successful, but the book sold well.
Or Roger McNamee, an early investor in Facebook who wrote Zucked in 2019, about how Facebook was terrible for democracy and public discourse. And Eli Pariser wrote about how we’re all in our own “filter bubbles”, arguing that the algorithmic feed creates positive feedback loops and only shows us things similar to what we already think. That we're all getting ever more confident inside our echo chambers.
Yuval Noah Harari called free information dangerous and suggested tech executives should be jailed for allowing fake AI profiles. More recently, Frances Haugen quit Facebook and then predicted, in her book, that social media would lead to millions of deaths. Jonathan Haidt wrote excellent books on how social media is to blame for the epidemic of teenage depression.
And so many more.
Social media became the villain. The cause of the division in our society, led by unscrupulous sociopaths, and in dire need of strong regulation.
Now, the problems were brought to light because of social media and its supposed impact, but the blowback soon spread to the broader tech world, each scandal feeding into the next and deepening societal distrust of the entire world of tech.
Not that it was all undeserved. Google had a data breach of its own around then, which led them to shut down Google Plus, which a few people noticed. Apple admitted to intentionally slowing down older iPhones, supposedly to protect aging batteries, but without telling the users! And Uber. Well, Uber.
Just as the general impression most of the world had of tech flipped on its head, driven by the relentless narrative of moral and ethical failings, unchecked hubris, and the seemingly unstoppable rise of the tech industry, the calls for regulation and oversight rose too. The whole technology industry came to be seen as complicit, following the old adage that power corrupts.
Europe created GDPR. California created CCPA. The FTC fined multiple companies billions of dollars, and the public, lawmakers, and the media all aligned on the belief that something ought to be done. Nobody quite knows exactly what, since social media isn’t straightforward like tobacco, or even oil companies. The ideas span from removing Section 230 to creating a Department of Technology Oversight to just breaking them all up.
These measures focused on privacy, market competition, data management, and content accountability. What all these ideas have in common is that none of them specify what end result they’d like in detail, nor how to get there, but would like someone else to think very carefully about the problem and hopefully solve it. After all, if you have a billion people saying things to each other online, it's not easy to answer how exactly you'll balance what they can say against local laws, global laws, US laws, the limits of what can practically be managed, and whatever principles the company itself holds. The goal remains mainly to curtail the unchecked power and influence of tech giants.
But the emotions are still real. We don't trust them, and as tech has crept into every nook of our lives, we don't want them to have power over us. Unlike the world of the early 2010s, we now live in the world that social media wrought. A world where technology is suspect, where technology companies are bad actors who prey on their consumers and need to be brought to heel. Where people like the guy who quit Facebook with 500 friends get far more attention. We’re far less trusting of institutions in general and companies in particular. And no wonder: we don’t think the world is getting better and we’re convinced the experts are lying to us.
Can you imagine how we'd react to anything new in such an environment? If someone came up with a new invention? One that might become powerful enough to govern our lives?
As we see now, with the new worry of AI as an existential threat. It is held to be much worse than social media, because it subsumes the worries about social media, where the purportedly super-powerful AI can confuse and rile up and misinform the public, and worse than every other technology we have, put together.
And we’re primed, from the worrying of the past decade, to do something. We’ve been hearing for a decade about how the government has been negligent, asleep at the wheel, how we can’t trust the industry to regulate itself, nor even trust the market’s consensus on what we want.
We exist in this world that social media has built for us. And this time around society stands ready, if slightly traumatised. We don’t want to have to answer again for being asleep at the wheel or for being bad actors. And so we are falling into the trap of doing something, anything, to not have this happen again.
I enjoyed reading your history very much, but as someone who does have concerns around AI, I was confused by the phrasing in the last line: "And so we are falling into the trap of doing something". I read the article before this last line as akin to a bird's eye view of the history leading up to our current moment in tech, but this line seemed to suggest it would be a mistake to try to "do" something about the potential risks of AI. I'm definitely not an AI doomer, but I don't clearly see the argument against learning from our past mistakes with social media and applying them to AI? I think someone like Sam Altman may even have taken heed against the unchecked "move fast and break things" philosophy. Enjoyed the piece.
Very valuable history. Thanks, Rohit. As Peter Gabriel sang, “Lord, here comes the flood. We’ll say goodbye to flesh and blood.” One aspect of social media’s effect on the economics of music can be seen either as a straitjacket or an opportunity. Artists are seen not to exist if they don’t have a strong social media presence. This does compel us to find creative ways to be noticed, and also, increasingly after the disastrous effect of the pandemic, has made live performances more vital and important. I’m not talking about the Swift, Beyoncé, and Sheeran mega-machines in particular but of venues all the way down to 50 capacity. It seems that people want to see and hear something real!