#DeleteFacebook Isn’t About Data Security

Reading Time: 3 minutes

In the wake of the Cambridge Analytica scandal, Facebook might actually be in trouble. The #DeleteFacebook hashtag is trending, and it’s seen some unlikely contributors, like Blink-182 singer Mark Hoppus and Brian Acton, the co-founder of WhatsApp, which was itself sold to Facebook. Meanwhile, Facebook stock has dropped by 10% so far this week. The FTC has announced that it’s opening an investigation into Facebook’s business practices, to determine whether Facebook violated its user agreement, an infraction that would come with a hefty fine. Mark Zuckerberg hasn’t made a public statement about the matter yet, but he’s been summoned by the UK Parliament. The bad news keeps piling up.

The obvious question is whether Facebook will survive, after whatever punitive measures are dispensed. And, while it’s possible that it won’t, it’s difficult to imagine how its extinction would come about. Its users could always leave, but there’s very little individual incentive to do that, and, given that a third of the world uses Facebook, getting everybody to quit would represent a massive coordination problem. Therefore, unless Facebook is banned outright, or somehow sued into oblivion, it seems likely that it will persist, if in some sort of regulated or otherwise curtailed form.

The less obvious question is: why now? This is by no means the only data scandal that Facebook has been embroiled in. Any intelligent consumer of digital media knows very well that Facebook is harvesting their personal data, and that such data has been treated carelessly before, and used for somewhat nefarious ends. Probably the most striking example came in 2014, when PNAS published a study by researchers who quite literally played with the emotions of Facebook users to find experimental evidence of Internet-based emotional contagion. More recently, earlier this March, it was revealed that Facebook’s researchers had told advertisers that the company had figured out how to identify whether its teenage users were feeling desperate or depressed—and that this could be worthwhile marketing data. Given all of this, it’s clear that data security isn’t the primary force driving #DeleteFacebook.

It’s much more plausible that what’s behind the media conflagration isn’t data security itself, but rather the involvement of Donald Trump. Some have claimed that Cambridge Analytica was responsible for Trump’s election, having provided his campaign with personal data about voters that (maybe) offered unprecedented psychological leverage, revealing which precise people could be viably targeted by propaganda. If you’re anti-Trump, and you believe this, then your beloved social network has unwittingly engaged in a large-scale erosion of democracy, which is to say, a technologically driven coup by a candidate you don’t like.

This may not even be the case, by the way. The person who’s most loudly proclaimed that Cambridge Analytica was responsible for the election’s outcome is the now-suspended CEO of Cambridge Analytica, Alexander Nix. Ted Cruz’s campaign hired Cambridge Analytica, obviously didn’t win the election, and, as David A. Graham of The Atlantic reports, “found that CA’s products didn’t work very well, and complained that it was paying for a service that the company hadn’t yet built.” Corroborating this view is Kenneth Vogel, a New York Times reporter from their Washington Bureau, who recently tweeted that Cambridge Analytica “…was (&is) an overpriced service that delivered little value to the TRUMP campaign.” He went on to claim that campaigns only signed up to secure access to the Mercer family—a rich line of big-time Republican donors—since the Mercers are major CA investors.

To sum up: Cambridge Analytica is only one of many organizations which have used personal Facebook data in a sinister manner, and its use of that data might have actually been inconsequential. If this is the case, #DeleteFacebook offers a clear lesson to tech companies, which is that it’s not actually important whether your product or service unscrupulously surveils its users. It’s more important to ensure that your company doesn’t give its data to anybody particularly unpopular, especially if they end up getting elected. If you sell your data to relatively unproblematic clients, you’ll probably be okay.

Cyborg Chess and What It Means


When arguably the greatest chess player of all time, Garry Kasparov, was beaten by Deep Blue in 1997, some took it to mean that human intelligence had become irrelevant. For instance, Newsweek ran a cover story about the match with the headline “The Brain’s Last Stand.” However, the chess-related conflict between human and computer cognition turned out to be somewhat more convoluted than that.

In the wake of the match, Kasparov came up with a concept he called “Advanced Chess,” wherein computer engines would serve as assistants to human players—or the other way around, depending on your perspective. Kasparov’s idea was that humans could add their creativity and long-term strategic vision to the raw power of a computer munching through plausible-seeming variations. He thought that, perhaps, in long games, such cyborg teams could beat computers, complicating the idea that human intelligence had simply become obsolete.

He was right. Highly skilled cyborg players turned out to be stronger than computers alone. Most famously, in 2005, a cyborg team won a so-called “freestyle” tournament—one in which entrants could consist of any number of humans and/or computers. And, even more surprisingly, the tournament was won by a pair of relatively amateur players—Steven Cramton, and Zackary Stephen, both far, far below master strength. They came out on top of the powerful program Hydra, as well as esteemed grandmasters like GM Vladimir Dobrov. And the secret to their success seemed to be that they were the best operators—they had figured out the ideal way to enhance the chess engines’ intelligence with their own.

In other words, for the human half of a cyborg team, being a supremely good chess player wasn’t as important as knowing how to steer computer intelligence. AI manipulation was itself a relevant skill, and the most important one. Cramton and Stephen ran five different computer programs at once—both chess engines and databases that could compare the position on the board against historical games. Using this method, they could mimic the past performances of exceptional human players, play any moves that all the engines agreed upon, and more skeptically examine positions where the different engines disagreed about the right way to proceed. Occasionally, they would even throw in a slightly subpar but offbeat move that one of the programs suggested, in order to psychologically disturb their opponents.
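The core of that operating method—play automatically when the engines agree, escalate to human judgment when they don’t—can be sketched as a simple consensus rule. This is an illustrative reconstruction, not Cramton and Stephen’s actual tooling; the function name and the use of plain move strings are assumptions made for clarity.

```python
from collections import Counter

def choose_move(engine_suggestions):
    """Given one candidate move per engine, return the consensus move
    if every engine agrees; otherwise return None, signaling that the
    human operator should examine the position more skeptically."""
    counts = Counter(engine_suggestions)
    move, votes = counts.most_common(1)[0]
    if votes == len(engine_suggestions):
        return move
    return None

# Unanimous agreement: the move can be played without further scrutiny.
print(choose_move(["e4", "e4", "e4"]))  # e4
# Disagreement: the position is flagged for human judgment.
print(choose_move(["e4", "d4", "e4"]))  # None
```

In a real setup, the disagreement branch is where the human adds value: comparing the dissenting engines’ evaluations, consulting the historical database, or deliberately picking an offbeat suggestion to unsettle the opponent.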

This is kind of a beautiful picture of computer-human interaction, in which humans use computers to accomplish cognitive tasks in much the same way that they use cars to accomplish transportation. However, there’s a strong possibility that this rosy picture won’t last for long. It’s possible that, eventually, chess engines will get strong enough that humans can’t possibly add anything to their strength, such that even strong operators like Cramton and Stephen would, if they tried to provide guidance, only detract from the computer’s expertise. In fact, this may have happened already.

In May of 2017, Garry Kasparov said in an interview with Tyler Cowen that he believed cyborg players were still stronger than engines alone. However, that was before Google’s AlphaZero chess engine, in December of 2017, absolutely destroyed a version of one of the world’s best chess programs, Stockfish. AlphaZero, which was grown out of a machine learning algorithm that played chess against itself 19.6 million times, won 28 out of the match’s 100 games, drew 72, and lost not one.

What was more notable even than AlphaZero’s supremacy was its style. AlphaZero played what seemed like playful, strange moves: it sacrificed pieces for hard-to-see advantages, and navigated into awkward, blocked-up positions that would’ve been shunned by other engines. Danish grandmaster Peter Heine Nielsen, upon reviewing the games, said “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.” If there’s any computer that’s exceeded the capacity of cyborg players, it’s probably AlphaZero.

This progression—from the emergence of strong AI, to the supremacy of cyborgs, to the more complete supremacy of even stronger AI—could pop up in other fields as well. Imagine, for example, a program, Psychiatron, which could diagnose a patient’s mood disorder based on a momentary scan of their face and voice, searching for telltale signs drawn from muscular flexion and vocal intonation. That program would make psychiatrists irrelevant in terms of diagnostic process.

However, you might still need a psychiatrist to explain Psychiatron’s diagnosis to the patient, and provide that patient with a holistic treatment that would best address the many factors behind their disease. Psychiatron would simply enable psychiatrists to be better. Eventually, though, that cyborg team might be superseded by an even stronger Psychiatron, which could instantly dispense the right series of loving words upon making a diagnosis, as well as a carefully coordinated package of medications and an appropriate exercise plan, all through machine learning techniques that would be completely opaque to any human operator.

This is a version of the future that’s either utopian or nightmarish depending on your perspective—one where we are, as Richard Brautigan wrote, “all watched over by machines of loving grace,” who, like parents, guide us through a world that we’ll never fully understand.

Does Social Media Really Polarize Our Politics?


It’s often said that social media has a polarizing effect on our politics. And, on the surface, this narrative makes a lot of sense. The polarization of politics has continued as social media has taken over our brains. And what social media does, among other things, is make a game of earning the approval of your peers, thus solidifying your group identity. When you post something that pleases the sensibilities of your cohort—whether it’s a handsome selfie or a solemn plea for stricter gun control—you get the satisfaction of an immediate bombardment of friendly notifications. The reward structure of the social media experience doesn’t provide incentives for expressing minority views, or objecting to the prevailing narratives, or befriending those who disagree with you.

Moreover, Twitter and Facebook aren’t great places for dialogue. Political arguments are usually futile in real life, even with all of the felicitousness provided by face-to-face interaction. It’s much worse when ideological disagreements need to be reduced to 280 characters, or have to compete with cute pictures of somebody’s baby. In this setting, sensitivity and nuance don’t play well. What gets the most attention is pithiness and aggression. In short, social media enables the self-congratulation and self-separation of mutually hostile political factions. Sounds pretty polarizing, right?

Yes. However, there’s a big and obvious question here, which is whether this is actually any different from the pre-Twitter media landscape. Long before Facebook was ever a gleam in Mark Zuckerberg’s eye, the various political classes selected the media that was most congenial to their respective worldviews. To take America as an example, in previous decades, Christian conservatives tuned into right-wing talk radio to hear about the horrors of the gay agenda, whereas elite liberals picked up Harper’s to read about the horrors of capitalism. (This is still true today, in part.) Bubbles and echo chambers exist in the absence of Twitter. All that’s required to create ideological homogeneity is tribal self-selection or homophily—the tendency of people to hang out with people who are like them and agree with them, given freedom of association. That’s definitely a pre-iPhone tendency.

But, of course, it’s still possible that social media has enhanced tribal patterns of behaviour—that this is a difference not of kind, but of degree. So, if we check the data, what do we find? Well, it appears that social media does, in fact, have an effect on polarization. It’s just the opposite effect that critics might expect. According to a demographic study by Boxell et al., researchers at Stanford University, political polarization is actually less pronounced among demographics that use social media more often (young people, essentially). This suggests that social media is unlikely to be a more powerful driver of polarization than old-fashioned media. (Or it suggests that, even if social media does polarize, there’s some countervailing anti-polarizing force that’s much more powerful.)

And, like the just-so story about why social media polarizes, there’s an appealing ready-made narrative about why the opposite might be true. While political disagreements on Twitter and Facebook tend to be shallow and nasty, they’re still genuine disagreements—something that doesn’t usually occur in traditional media. The New York Times doesn’t contain a second page declaring that all the articles on the front page are slanted. And while it’s true that debate programs are a staple of political television, such programs are usually staffed by a preexisting team who are paid to perform a predictable set of reactions to ongoing affairs. Meanwhile, on Twitter, it’s quite easy to run into novel objections to everything you believe in, which, even if they aren’t particularly convincing, might compel more considered private reflection.

Or maybe it’s even simpler than that. It’s possible that young people are less polarized because social media is so nasty and tribal. While a minority of social media influencers make a lot of provocative noise, it’s possible that the non-contributing majority is quietly alienated by the vitriol. While a controversial tweet with 1200 retweets looks impressive, there’s no way to measure the number of users who have quietly rolled their eyes and moved on—or have simply quit Twitter altogether.

There’s a larger lesson here, which is that it’s unwise to infer narratives of societal change based simply on the most visible behaviour provoked by one app or another. (Another demonstration of this: millennials have way less sex than their parents, despite the existence of Tinder and all the moral panic surrounding it.) Ultimately, sensationalist narratives about the polarizing effects of social media are just the kind of thing that’s popular on social media.