When is the last time you checked Twitter, Facebook, or Instagram? Last night? Before breakfast? Five minutes ago?
If you have a social-media habit, you are not alone: about 3.5 billion people are active participants. Globally, during a typical day, people post 500 million tweets, share over 10 billion pieces of Facebook content, and watch over a billion hours of YouTube video. Since our brains are wired to process social information, it’s hardly surprising this technology has grown so popular so fast. But as most people are also aware, it has a dark side.
“Social media disrupts our elections, our economy, and our health,” says Sloan professor Sinan Aral. In The Hype Machine (Penguin Random House, 2020, $28), Aral details why social-media platforms have become so successful yet so problematic.
“This machine exists in every facet of our lives,” Aral says. “What do we do? How do we achieve the promise of this machine and avoid the peril? We’re at a crossroads.”
Aral, who has been studying social networking for 20 years, was part of the team behind a 2018 study showing that false news stories shared on Twitter were 70% more likely to be retweeted than true ones. Why? Most likely because false news has greater novelty value and provokes stronger reactions—especially disgust and surprise.
And such responses are precisely what bring in audiences and revenue. “The business models that run the social-media industrial complex have a lot to do with the outcomes we’re seeing,” Aral says. “It’s an attention economy, and businesses want you engaged. How do they get engagement? Well, they give you little dopamine hits, and … get you riled up.”
The political implications are sobering. During the 2016 US presidential campaign, Russia spread false information to at least 126 million people on Facebook and another 20 million on Instagram. “I think we need to be a lot more vigilant than we are,” says Aral.
To that end, he favors automated and user-generated labeling of false news, and measures to minimize the ad revenue that content creators can collect from misinformation. He believes federal privacy measures are potentially useful and calls for data portability and interoperability, so consumers “could freely switch from one network to another.” He does not endorse breaking up Facebook, suggesting instead that the social-media economy needs structural reform.
But without change, he adds, Facebook and the others risk civic backlash. “If you get me angry and riled up, I might click more in the short term, but I might also grow really tired and annoyed by how this is making my life miserable, and I might turn you off entirely,” he says. Still, bad outcomes are not inevitable—for the companies or for society.
“Technology is what we make it,” he says, “and we are abdicating our responsibility to steer technology toward good and away from bad. That is the path I try to illuminate in this book.”