Daniel Suarez is a New York Times bestselling author of three amazing hi-tech thrillers and has become a darling of the augmented reality community. But his books, his real-world experience in IT security, and his ideas go far beyond any one technological field. His latest novel, Kill Decision, follows a covert military intelligence team as they scour the globe for the source of autonomous drone attacks against the U.S. This very timely theme has gotten a lot of well-deserved attention.
But there’s a supporting theme in the book that is equally deserving of discussion–the idea of “sock puppetry” in social media. This is the phenomenon of people behind the scenes using armies of thousands or even millions of fake social media accounts in order to create the false impression of public support behind any given subject. (This is also called “astroturfing,” or fake grass-roots marketing.) In Kill Decision, shadowy intelligence operatives working for corporate and governmental employers use these sock puppet armies to perform “Influence Operations”–staged actions designed to rally and shape public opinion on a given topic.
Over lunch at Vox: the 4D Summit – an augmented reality conference in Los Angeles last month – I had an opportunity to chat with Daniel about these ideas. I have come to greatly respect Daniel on a personal and professional level, and his insights on social media manipulation proved just as informed and thought-provoking as his treatment of so many other technological issues of our times.
BW: Do you really think the degree of online sock puppetry and astroturfing that you describe in your book, Kill Decision, is happening now?
DS: No. I don’t think it’s a widespread phenomenon yet, but the technology exists for very few people to manage hundreds of thousands of online personas – and it’s becoming cheaper every day. For this reason I think the problem is going to grow. And let’s be clear: these are behind-the-scenes public relations companies, viral marketing firms, botnet herders – and sure, some government agencies. Literally every organization uses social media for brand management, and news organizations track social media for what’s trending. That means even the perception of popularity can go viral. Thus, shortcuts will be taken to gain followers and positive mentions, creating the conditions for virality.
And doing so is cheap. It only costs about $80 per 5,000 fake Twitter followers (Facebook friends cost several times more – but followers and positive reviews can be purchased for just about every major site). Think about it: roughly $12,000 for a million online ‘people’ to add credibility to your cause. Aged accounts (those that have been around for years) will cost more. Automating discussion among those fake followers – aka ‘persona management’ — is what sells the illusion of a massive upwelling of popular support.
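As a quick sanity check on those quoted rates, here is a back-of-envelope sketch using only the figures above (the interview's rounder $12,000-per-million figure presumably reflects a bulk discount below the per-batch rate):

```python
# Back-of-envelope math for the fake-follower prices quoted above.
PRICE_PER_BATCH = 80.0   # USD per batch of fake Twitter followers (quoted)
BATCH_SIZE = 5_000       # followers per batch (quoted)

per_follower = PRICE_PER_BATCH / BATCH_SIZE   # cost of one fake follower
per_million = per_follower * 1_000_000        # cost at the batch rate

print(f"${per_follower:.3f} per follower")
print(f"${per_million:,.0f} per million at the quoted batch rate")
```

At the quoted batch rate this works out to about a cent and a half per fake account, i.e. roughly $16,000 per million at list price, which makes the $12,000 bulk figure entirely plausible.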
I think what’s been detected so far is just the tip of the iceberg when it comes to sock-puppetry. If a person or organization isn’t troubled by ethics, and they want to quickly influence public opinion – i.e., make people think something big is happening when it really isn’t – then they’re going to create armies of fictitious followers. It’s just too easy in an open, anonymous system like today’s Internet.
BW: It might seem like a victimless crime. For instance, let's say I've got a rock band, and suddenly a hundred thousand people 'like' my band (because I bought followers) – I'll sell more downloads or tickets. It's not going to bring the world down. But in the aggregate, it undermines faith in all online discourse.
DS: I wouldn’t go that far. We can still have discussions with known friends in social media. Dealing with friends is naturally different from taking the temperature of what’s trending collectively or taking the advice of strangers. When we’re dealing with friends, what we’re doing is being careful about whose opinions we let distract us. And that more closely mirrors what happens in real life. With everything else we see or read on the Internet, our default position should be skepticism.
DS: Yes, and this will probably be more daunting than my drone treaty idea. I think we need two Internets. We need the existing Internet, which is fairly open, anonymous, and insecure in its design – that can be used for games and anonymous communications. And we need another network designed from the ground up for security, where you don’t get to conceal your identity; on this second network we would run all our financial transactions, our dam sluices, gas pipelines and power grids, voting, legal proceedings – all of our critical infrastructure functions. Anything that truly needs to be secure (i.e., one person, one account). Thus, the gossip on the old, insecure Internet could remain just that: gossip, while on the second, more secure, identity-enforced Internet it would be much more difficult to fake identities, and public discussion would be both more substantive and more civil because there would be social consequences for inappropriate actions. Sock-puppets would be scarce here because they wouldn’t be cost-effective, and they couldn’t be created in large enough numbers to make a difference.
Of course, this might sound like an Apollo-level national project in terms of cost, but in the long run it would be well worth it to have a second, more secure Internet. We can’t keep connecting critical systems to an inherently insecure architecture (which is what TCP/IP is), and we can’t run high-trust social interactions on an anonymous basis. Just ask anyone in IT security whether it’s possible to keep a determined and skilled attacker out of your network if it’s hooked to the Internet. The answer might depress you. Then ask how many of those IT security pros have faith in antivirus software (very few). That should tell you all you need to know about trust on the existing Internet.