The Death of Klout is not the end for Influencer Models

Reading Time: 3 minutes

Amongst the General Data Protection Regulation’s (GDPR) first casualties, Klout stands out for the lack of sorrow at its demise. The premise was simple enough: distilling users’ presence across multiple social media platforms into a single score. The (almost always) two-digit number bears an eerie resemblance to the rather vague and sensationalist descriptions of China’s social credit scheme, Sesame Credit – albeit several years ahead of the Communist Party’s alleged plans.

The premise was flawed for several reasons. For one, there were concerns about the ethics of an opaque system for measuring social media influence, not least one boiling users’ influence down to a couple of numbers. Secondly, and perhaps more pressingly, Klout’s model was (for want of a better word) useless. Rather than showing anything meaningful about social media influencers, it did little more than aggregate scores (often woefully poorly). Even worse were its descriptions of influencers’ areas of specialism: as The Drum pointed out, Klout’s view of Pope Francis portrayed him as both an expert theologian and a leader on Marxism, warfare, and Miss Universe. Such profiles did not fill the world of marketing and PR with great hope for Klout, which is winding down on May 25th (the same day GDPR comes into force).

Whilst the regulations undoubtedly played a role in the downfall of Klout (a service which almost certainly didn’t comply with them in terms of data collection and processing), its failure to offer a meaningful service was almost certainly at the core of its demise. That’s not to say that studying influencers is worthless for marketers, journalists, and communication professionals – just that smarter ways of studying influence are necessary.

One of these comes from Cronycle’s service. In addition to using Twitter data and network analysis to produce our Insight Reports, Cronycle keeps tabs on influencers across dozens of topics through our Right Relevance platform. Rather than giving users a single score, influencers receive scores for individual topics and sub-topics – this more granular approach is more valuable since it allows users to narrow down on the specific expert or influencer they want. It also builds up links with related influencers, creating networks which reflect underlying similarities and ties.
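The difference between a single aggregate number and per-topic scores is easy to make concrete. Below is a minimal sketch: the function name, the input shape, and the per-topic normalisation are all illustrative assumptions for this post, not Cronycle’s actual scoring model.

```python
from collections import defaultdict

def topic_scores(engagements):
    """Score each user per topic rather than collapsing everything to one number.

    `engagements` is a list of (user, topic, weight) tuples, where the
    weight might be retweets or mentions on that topic.
    """
    raw = defaultdict(lambda: defaultdict(float))
    for user, topic, weight in engagements:
        raw[user][topic] += weight

    # Normalise within each topic, so a score of 0.8 means "80% of the
    # measured activity on this topic" and is comparable across topics.
    totals = defaultdict(float)
    for topics in raw.values():
        for topic, weight in topics.items():
            totals[topic] += weight

    return {
        user: {topic: weight / totals[topic] for topic, weight in topics.items()}
        for user, topics in raw.items()
    }

data = [("alice", "gdpr", 40), ("alice", "ai", 10),
        ("bob", "gdpr", 10), ("bob", "ai", 40)]
scores = topic_scores(data)
print(scores["alice"]["gdpr"])  # 0.8
print(scores["alice"]["ai"])    # 0.2
```

A single Klout-style aggregate would rate these two invented users identically; the per-topic view shows that one dominates the GDPR conversation and the other the AI conversation, which is exactly the distinction a user hunting for a specific expert needs.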

An image of top influencers on the topic of GDPR. The sliders on the right allow for users to narrow down on the group they are particularly interested in.

The service extends beyond Klout’s focus on numbers, though. At the broader end of the scale, Cronycle’s service gives a dashboard allowing you to search through topics, compare trending hashtags, look at the top influencers and domains, and see related topics.

The Cronycle Influencer and Topic dashboard for AI

Cronycle users can also search through articles by top influencers on their areas of speciality (as well as through related topics), giving both the tweets by the influencers and their articles. Domain searches are another feature, giving a list of top topics and top influencers for specific sites.

The final aspect is Topic Intel, which allows users to compare a single subject across time – an equally important comparison to that between different subjects.

Topic intel for AI and Machine Learning

Users can easily find how the top spots have changed – or not – for their subjects, as sorted by retweets or mentions (all Twitter activity).

Klout may be dying, but the influencer model is by no means moribund. Holistic approaches, like Cronycle’s, build on Klout’s work of showing influence through a numeric system while seriously ramping up the extra information required to make that score useful.

GDPR – Just one month to go

Reading Time: 2 minutes

The General Data Protection Regulation (GDPR), the Europe-wide change to data protection, comes into effect on May 25th. Companies around the world who deal with European citizens’ data will be affected – we’ve already seen tech giants like Facebook and Google scrambling to save their business models, knowing that the fines which can be levied against them through GDPR are too big to ignore. But GDPR isn’t just about penalising the big players who have run roughshod over customers for years: small organisations will also have to become compliant.

Judging by the conversation which Cronycle monitors on Twitter, what being compliant means is still a subject many companies are struggling to work out. GDPR extends a lot of existing data protection regulations, giving provisions for consumers such as asking for their data in a portable format, limiting data hoarding, and attempting to ensure that both data controllers and data processors are kept in check. And we’ve yet to see all the national laws which will come into effect alongside GDPR, adding new levels of restrictions on data processing and handling.

If any of that sounds confusing, Cronycle has you covered. We’ve been monitoring the leaders and trends in the GDPR conversation for the past five months in our Insight Reports: with these, you can find out exactly what people are discussing when it comes to GDPR, and who it’s worth following for advice on the implications. We’ve recently added a new section, top articles, which covers the top 10 most influential GDPR articles for each month – these include news stories on big companies which are falling foul of data protection laws as well as guides and tips on how to stay compliant and on how to make the most of GDPR. We also ran a blog at the start of the year featuring five of the best guides on how to tackle compliance.

We also launched our GDPR Slack channel in February. Subscribers get key content, including both Insight Reports and content from the top influencers identified by Cronycle’s algorithm. The channel is highly modular, so you can add on other topics of interest to keep abreast of developments in these areas. Finally, we’re hoping that the Slack channel will act as a community hub, so users can invite colleagues and friends to the channel.

Stronger regulations like the GDPR are key to ensuring that the sort of abuses which Facebook has been found guilty of come to an end – but implementation has to be balanced with a recognition of the risk to small businesses. Stay up to date with all the key points with Cronycle’s coverage.

Is Facebook going the way of Bell? Don’t bet on it

Reading Time: 3 minutes

America’s industrial scene has long been marked by monopolies – and by government attempts to break them up and ensure fair trade. Rockefeller’s Standard Oil, once the world’s largest oil company, faced that economic reality in 1911, when the Supreme Court at last upheld the government’s antitrust case against it. With Facebook in the crosshairs from both sides of the political aisle, there have been suggestions that the social media giant might just be too big to be left standing as well. As South Carolina Senator Lindsey Graham argued to Mark Zuckerberg over his two-day hearing on Capitol Hill, Facebook’s spread across apps and platforms makes it feel discomfortingly close to a monopoly (even if Zuckerberg feels otherwise).

Yet in the tech industry, there’s a better example of a monopoly which escaped the hatchet: the Bell System, which for over a century held sway over American telephony. Unlike Standard Oil, which had sought to fight the government, Bell was smart enough to ask to be regulated. In return for cutting off parts of its enterprise, it got to maintain its monopoly until the break-up of 1984. Not bad going, all things considered.

A government-regulated Facebook monopoly would be beneficial for both parties. Facebook would get to keep on keeping out the competition. The US government would have the peace of mind of regulating the biggest source of its citizens’ data outside of its own servers. And that’s not even considering that regulation would almost certainly offer a backdoor for accessing said data for national security purposes.

There’s also the fact that regulation avoids one danger of a break-up: strengthening Chinese tech companies. That was seen as such a trump card that it made its way onto the notes that Zuckerberg brought to the first day of his hearing. It’s a fair point, perfectly played to America’s policymakers with their endless references to Facebook as an ‘American company’. Given the lack of a Western competitor, China’s all-in-one WeChat (artificially grown in a state capitalist vacuum) could have the utility and the clout to take over at least some of Facebook’s roles. For US politicians, the idea of a company with close links to China’s government is probably even less appealing than a company with dubious links to Russia’s.

Of course, Bell was dealing with telephones and telegrams: simpler technology, and far easier for a government to wrap its head around than social media, data protection, leaks, and so on. The level of technical expertise on offer at Zuckerberg’s hearings in the States, in general, has not been the most impressive. What form the regulation would take is also difficult to see: perhaps monitoring of the types of data shared with third parties and available to Facebook employees themselves.

That still leaves the question of political division. Whilst concerns about Facebook are shared across the spectrum, the reasons for those concerns are not. Democratic politicians attacked Zuckerberg largely on Russian interference and the use of the site’s advertising platform for discrimination. Republicans routinely claimed that conservatives were being censored, with live-bloggers Diamond and Silk repeatedly being presented to Zuckerberg as victims of his site’s liberal agenda. Moving towards a consensus – beyond agreeing that Facebook has made a colossal blunder – seems almost inconceivable.

And finally, there’s the fact that this should have been a chance to grill the man in charge of a company which handed over immense amounts of data to dubious researchers and even shadier firms. Instead, between the struggles of lawmakers to actually understand what Facebook does and the rare cases of tough questioning that didn’t allow Zuckerberg to return to his script, there was far too much bonhomie: asides asking for rural internet, requests for top tips on recruiting spots, even the outrageous attempt to curry favour by noting that Zuckerberg’s alma mater in Westchester, New York, was proud of him. In what were nominally occasions for cutting Mark Zuckerberg down to size, America’s politicians made clear Big Tech’s immense staying power. The lure of an industry which offers jobs and money – and re-election – was too big, apparently.

Facebook has, for a long time, not been a product which people show real excitement about. Millennials, the hip generation for platforms, have increasingly voted with their feet or are vocal that they view it as a tool for keeping in touch with family. And yet, despite all the bad press and Silicon Valley screw-ups, the site has continued to hold on and grow. The immense fines possible under GDPR might hurt it enough to force a rethink or redirection, but don’t expect action from America’s politicians, too in thrall to the seductive power, money, and jobs of Big Tech.

Sockpuppetry is the new normal. That doesn’t make it healthy

Reading Time: 2 minutes

It’s been nearly three years since Adrian Chen first discussed the Internet Research Agency, a euphemistically titled group which was pumping out pro-Kremlin and anti-American messages for money. At the time, eyes were elsewhere online – it was the year of Ashley Madison, after all, and a time when China ranked alongside Russia (if not above it) in the sphere of public paranoia.

Even then, the idea of governments astroturfing online spaces (i.e. hiring online goons to make it look like their actions had far greater support than in reality) was not new. China’s 50 Cent Army (a joke on the low rates supposedly paid to the sockpuppets posting pro-Communist Party propaganda) had been floating about since the early noughties. The US has had branches for this sort of work since at least 2010, and the Snowden leaks revealed the UK had been indulging in it too.

The recent revelation that astroturfing had been used on Reddit and Tumblr should thus have come as a surprise to no-one; that it was the Internet Research Agency involved, even less so. It’s tempting, in fact, to say that given how prevalent we now know this practice to be, we’re in a better situation than we were in 2015. We have more groups and institutes keen to fact-check supposed Russian sockpuppets (more typically called trolls) than ever before. People are, in some ways, warier than ever of information coming from certain outlets.

In practice, what that means is that the disturbing brand of nihilism which once hung out in the corners of 4chan has become increasingly pervasive. We live in the time of the simulacrum, in which every piece of information is supposedly an illusion created to hide the lack of any real truth. The videos you watch may be created through neural networks, so you can’t believe your eyes; the words you read have already become synonymous with fake news, regardless of how you lean politically. To be savvy means to trust nothing, or to go all-in supporting institutions which face serious questions about impartiality (Hamilton 68, though a fascinating and at times useful resource, has understandably been accused of being a little too close to the old Cold Warrior mentality of NATO).

The point of sockpuppetry is to inflate the influence of a group, or a country, or an opinion, and to discredit opposing points of view. In some regards, it’s done the latter far more effectively than could have been hoped. Practically any deviation from the norm is now treated as evidence of yet more Russian interference. Sockpuppets, in short, have turned fairly rational and moderate individuals into conspiracy theorists. And conspiracy theorists are, by the nature of society, a joke at best and a concerning fringe at worst.

The age of disbelief is upon us. A little cynicism is healthy; a lot is not. The failure of social media companies to combat this behaviour is a blight which will linger long after the scandals over Russia have died down.

The Efficacy of Bot Purges

Reading Time: 2 minutes

There’s something rather brutal about the idea of bot purges – perhaps a reflection of the humanity with which we endow human-looking accounts. Twitter’s decision to suspend thousands of accounts last month was less horrific than it sounded: a small attempt to solve a problem which has plagued the platform for over a year now, and which has brought it onto the radar of US and British politicians as an ambivalent if not mercenary hawker of propaganda.

This didn’t stop it from earning the ire of conservative accounts (including some of the more prominent members of the alt-right and related conspiracy theorists), who had found thousands of their followers eviscerated. To them, it represented just another part of the ongoing skirmish with big tech which had attempted to shut them out of the marketplace, or flag their ideas as fake news.

It doesn’t help that at least some of the accounts which Twitter targeted were real conservative commentators rather than strings of code. In fact, it only adds to the ongoing debate about what a bot really is. Granted, there are the crudest examples of accounts which blurt out the same message over and over, but there are plenty of humans who perform similar functions. This arguably says more about the state of Twitter discourse than anything: if your posts look like they’re written by a painfully simple programme, you’re almost certainly not adding a lot to the conversation. The problem only intensifies when we consider that a lot of bot accounts are ‘cyborgs’, partially automated but with a human who can step in as and when necessary.
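The crudest bots described above – accounts that blurt out the same message over and over – can be caught by a correspondingly crude heuristic. A minimal sketch (the function name, the 0.5 threshold, and the sample posts are all invented for illustration, not any platform’s actual detection logic):

```python
from collections import Counter

def looks_like_crude_bot(posts, threshold=0.5):
    """Flag accounts whose output is dominated by one repeated message.

    A human account rarely posts identical text more than half the time;
    the simplest spam bots do little else. The threshold is an arbitrary
    illustration, not a tuned value.
    """
    if not posts:
        return False
    _, top_count = Counter(posts).most_common(1)[0]
    return top_count / len(posts) >= threshold

spam_account = ["Buy now!"] * 8 + ["hello", "world"]
human_account = ["morning", "lunch thoughts", "Buy now!", "gym", "news take"]
print(looks_like_crude_bot(spam_account))   # True
print(looks_like_crude_bot(human_account))  # False
```

The limits of this kind of check are exactly the article’s point: a ‘cyborg’ account that mixes automated and hand-written posts sits below the threshold, and a tedious human sits above it – automation and insincerity are not the same thing.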

And then there are sockpuppets – better known as trolls – who voice an opinion for money or patriotism. They’re by no means new or unique to Twitter (Wikipedia has struggled with them for years), but at a time when social media has increasingly come under scrutiny, it’s difficult to ignore them. Of course, bot purges can’t capture them: Twitter can only stop automation, not insincere or unethical tweeting.

And therein lies the crux of the problem: purges of bot accounts are like putting a plaster over a serious wound. The people running the bots will soon have them back up and running, not least because they’ve proven very quick to adapt to research on automation and integrate it into their strategies. A longer-term solution is necessary to restore any meaningful trust in social media. Rather than engaging in heavy-handed crackdowns with little explanation – relying on people not caring about which algorithms actually decide who the alleged bots are – Twitter would do a lot better to emphasise transparency. Because, at the end of the day, Big Tech has never shown great concern about its role in affecting democracy, for better or worse: instead, its actions have been dictated by PR concerns.

Moving away from the pedestal of the Philosopher King could do Twitter a little good – and society a whole lot more. Fight bots, by all means, but encourage media education and engage with people too; that’s the way to make the change more than skin-deep.

The Strava heat maps are a grim reminder of Big Tech’s power

Reading Time: 3 minutes

The lone hacker used to be the stereotypical threat to national security: think of Kevin Mitnick, who prosecutors claimed could have started a nuclear war by whistling into a phone. Then it was the citizen activist, whistleblowers (perhaps affiliated to WikiLeaks). Then it was the state-sponsored group – perhaps Chinese, perhaps North Korean, perhaps Russian. The Stuxnet virus (which damaged equipment in Iran, and which most likely was a result of US-Israeli teamwork) showed that the digital world could have a very real impact on the physical, putting a whole new urgency on the need to keep computer security levels as high as possible.

So it seems almost anticlimactic that it was an app for tracking where users were running that has undone so much secrecy for the US military in particular. It’s a lesson that perhaps the greatest threat to operational security lies in the treasure trove of data which we often unwittingly produce – and which private companies, in jurisdictions with limited governmental oversight, often think little about.

The case of the app, Strava, is almost farcical: the company decided to publish heat maps of its users’ activity to show off its success, and got more than it bargained for when the visualization effectively gave detailed maps of routes in military bases around the world. Much of that data may be of limited use to opposing nations or non-state actors (particularly where Strava doesn’t seem to have been heavily used). On the other hand, in spaces where the heat maps are bright, the visualization essentially sketches out a handy blueprint for troop movement. At its worst, it highlights locations where military forces were not known to be. It is, to put it lightly, a fairly horrifying outcome for the US armed forces, since Strava is far less heavily used by its opponents (whether they use different apps or simply lack activity trackers remains to be seen).

Strava, in a move either impossibly brave or impossibly foolhardy, at first went on the defensive and suggested that military personnel should have opted out. There is a nice logic to this for the company (who have kept the heat maps up on their site), since it puts the onus onto the consumer. This is the stance taken by social media platforms like Facebook, whenever an embarrassing event has come to light: you should have put your privacy settings higher!

This is also a neat effort to glide over the conflict of interest in this argument: lower privacy settings equal greater data collection, which means more information to sell on to other companies. By putting privacy as low as possible by default, and by ensuring that changing this is not a simple process, companies like Strava get to have their cake and eat it. The very fact that Strava has now offered to change how its privacy settings work should not be read as a company owning up to its mistakes: it’s a PR move and volte-face as part of an attempt to cover up its own shortcomings.

Designs are not confusing simply because of incompetent programmers – in fact, quite the opposite can be true. For too long, Big Tech has got away with hoovering up Big Data before issuing belated fixes which do little to address the information already collected. This is only going to get worse as we head further into the age of the Internet of Things, where privacy policies will be increasingly obscure and opting out of a small, screenless device will be practically impossible. Whether it’s consumer pressure or a governmental crackdown (such as heavy enforcement of the GDPR against the most egregious offenders), something has to change: the Strava story is just another case of Big Tech’s disregard for all of us except in how much our data sells for.

Big Tech’s gamble is wearing thin – and why that’s bad news for us

Reading Time: 3 minutes

It’s now 22 years since Section 230 of the Communications Decency Act came into force: a landmark piece of legislation even in 1996. It meant that those hosting information on the then-nascent internet were not to be treated as the “publisher or speaker” of any of that information. This made perfect sense at the time, when the internet remained a small curiosity and it was fair to assume that small, practically amateur companies would have had difficulty scanning all the content they were hosting (the early giants like AOL even then seemed like a different story, but that seems by the by now).

Online content – and the avenues in which it is created, uploaded, and, perhaps most importantly, disseminated – has only grown since then. We are past Web 2.0, with its blogs and microblogs, and into the Internet of Things – and yet we are plagued with some very pre-digital problems online: child abuse, terror, and defamation (admittedly in new forms, such as revenge porn). The tech industry’s answer has usually been to shrug when confronted with this sort of material. There’s the free speech argument, and then there’s the legal backing which legislation like Section 230 (and its national and regional equivalents) offers.

British Prime Minister Theresa May’s speech at Davos stands in rude contrast to this civil libertarian ethos. This is not a shock: May views the internet as another sphere which requires total government control and regulation, a mere expansion of the offline world. This is not a new view, and it’s one rooted both in government over-reach, and in a complete lack of technical knowledge. Consider Amber Rudd’s desire to combat the evils of encryption without having any understanding of how it works: there’s a rank arrogance to the professional politician which is only matched (fittingly) by that of the big tech company.

The collision course between the two has long been set, but it’s increasingly clear that public opinion has turned against the argument for laissez-faire. Facebook, Twitter et al long assumed that the utility they offered could trump governmental arguments that they should be regulated more heavily. However, a slew of stories about objectionable content – such as terror and borderline child abuse on YouTube, or targeted ads used to support hatred on Facebook – have increasingly eroded this position.

And that’s bad news for users. The current scheme is broken, admittedly (content providers seem to care much more about PR after scandals than actively working on solving major structural problems), but heavy government regulation is concerning at best. At worst, we can expect to see a quiet creep of illiberal regulations under the guise of national security. Lest we see this as too much of a conspiracy, let’s not forget that in the wake of the Snowden revelations, the British government decided to consolidate mass surveillance powers. By failing to self-police, big tech has fundamentally removed its own popular support base, allowing governments which don’t understand technology and which seek to gain more power in the name of national security an open goal.

Content providers and platforms are in no way victims here – they are equally complicit in what amounts to a rising risk to their users. If they wish to truly avoid over-regulation, they need to move beyond the sort of measures which are patently designed to improve their appearance. Consider YouTube’s plans to fund counter-terror videos: does anyone really believe this will stop someone moving down the path towards radicalisation? A greater emphasis on moderation which goes beyond horribly underpaid contracts (with no support) or crude algorithms may be the only way to save them – and us – from a future which looks a little Orwellian.

Murdoch’s Right: Facebook would be better off paying publishers cash than lip service

Reading Time: 3 minutes

Rupert Murdoch, perhaps the most polarising media owner in the world, may seem an odd prophet of the future. He did sell off a large chunk of his media empire to Disney, after all: a move some have read as a retreat from the game of newsmaking which he led for so many years (often enough with dubious practices). It’s hard to side politically with the man whose papers were involved in the phone hacking of grieving families, or whose television station routinely blasts out falsehoods.

But the 86-year-old media mogul’s statements on Facebook are spot on, recognizing that the rather staid and self-centered world of legacy media has come a cropper at the hands of even more elitist and narcissistic institutions: social media platforms.

Murdoch’s statements took aim at Facebook’s offer to “prioritize” news from ‘trusted’ publishers – part of a rebranding effort by a social media giant that once attempted to play down or virtually hide the impact of the fake news it was so happy to carry. The contempt for journalists is almost palpable: rather than actively paying for good content (which requires the investment of time and effort, and can put reporters’ lives at risk), Facebook offers a sort of pat on the head, months after trying to pretend it didn’t help undermine trust in media.

It is often difficult to empathize with the media, which had essentially played big tech’s game of condescending to audiences before big tech came around. The New York Times‘ motto, ‘All the news that’s fit to print’, was originally a barb against rivals seen as sensationalist and commercial – but today it smacks of arrogance. At the very least, though, the news media – for all its failures (and there are many) – produced content rather than acting as a mere conduit. Moreover, when the traditional media screwed up, someone normally got the blame for it.

Facebook offering recompense to the news organisations whose advertising revenue it essentially takes would have been a better alternative for a number of reasons. First and foremost, it can afford to, and it would do far more good than token efforts to fund fact-checking sites. Coverage of those efforts revealed that, unsurprisingly, they were Silicon Valley spin at its finest: a lot of hot air which failed to actively solve the problem, but was touted as a shiny new solution. All this, rather than working on a way to support real news.

Secondly, and perhaps more worryingly, there is the question of what a ‘trusted’ publisher is. Is it a paper which skews liberal, reflecting Facebook’s ‘beliefs’? (It is difficult to see immense tech firms which do their best to take our data with as little consent as possible as liberal in any worldview but their own, admittedly.) Or perhaps, more cynically, it will simply become pay-to-play on a bigger scale: those who pay some sort of fee will enter the category. Or perhaps, even more cynically, news which is critical of Facebook will quietly vanish whilst puff pieces and press releases sail to the top of our newsfeeds.

The most monstrous fact about Facebook is its seeming inability to accept, or maybe even countenance, that its immense power could be dangerous – or that folks like Mark Zuckerberg could be distant from normal humans. It truly is a caricature of West-Coast scientific optimism, entirely certain in the progress of science and entirely blind to its own failures – or to the value of any other institutions. The need to ‘disrupt’ and ‘innovate’ can apparently be used to justify what amounts to little more than hijacking advertising revenue, or helping to ensure that fake news is as well disseminated as the real thing. Once you imagine yourself as the face of the future, why care about anyone else?

Facebook, like all social media, has always relied upon content creators (be they friends and family, or professional newsmakers). In its haste to proclaim itself the only meaningful expression of humanity (as evidenced by Zuckerberg’s tone-deaf attempts to convince the world that he’s ‘fixing’ Facebook in any other way than ensuring it suits his agenda), the social media giant has fundamentally forgotten this. If it really wants to make the world a better place, it would do well to follow the insalubrious but canny Murdoch’s suggestion and support content creators – and accept that just maybe, other people’s work is of value too.

The Messenger-Cryptocurrency Game looks more like desperation than innovation

Reading Time: 2 minutes

“Bitcoin has established itself as the «digital gold», and Ethereum has proved to be an efficient platform for token crowd sales. However, there is no current standard cryptocurrency used for the regular exchange of value in the daily lives of ordinary people. The blockchain ecosystem needs a decentralized counterpart to everyday money — a truly mass-market cryptocurrency.”

Thus begins the white paper for Telegram’s own Open Network (TON), “designed to host a new generation of cryptocurrencies and decentralised applications.” It is, to put it politely, a lot of hot air. Bitcoin did not start off as digital gold, for one, but the bigger issue is still that “a truly mass-market cryptocurrency” requires a true mass-market, not buzzwords like disruption hurled at an imagined, credulous public. It reflects poorly on both Telegram and those tech journalists who have enabled these attempts to grab headlines.

Granted, it succeeds in identifying that fewer and fewer vendors in the real world actually accept cryptocurrencies (and for those in countries without such vendors, getting your money out of the system requires all sorts of bizarre transactions). But its solution to the problem seems to be ‘you can use cryptocurrencies to purchase things in-app!’ The idea that using what amounts to ‘in-game currency’ for services on Telegram will transform the cryptocurrency environment feels like the height of hubris – if Bitcoin and Ethereum couldn’t break into the current banking scene (and not for lack of hype), then the idea that messenger-integrated TON will do so, merely by dint of being messenger-integrated, suggests a somewhat complicated relationship with ground truths.

There is also the question of security, which Telegram casts as its main selling point. Ignoring the wonderful and salacious claim in the Fusion GPS memo that the Russian government had compromised Telegram’s vaunted cryptography, we know for sure that hackers in Iran have done so. This does not bode well for the security of all that in-app currency in TON, or for bolstering mass-market interest in cryptocurrencies more widely. Given the ease with which wallets can be lost, or the frequency with which exchanges turn out to be elaborate scams, it’s not hard to see why ordinary people aren’t so keen to turn hard-earned cash into ‘the next big thing’.

You could write off this decision by Telegram as a sort of fluke – a mistake by a company desperate to stand out from its rivals. This would be overly generous, since Kik (a messaging app most popular with kids and teens) is doing much the same. Yet again, the idea is that somehow, in-app currency will translate into a wider appeal for cryptocurrencies – or indeed, that a messenger app/cryptoexchange is all that’s needed to give blockchain-derived currencies a shot in the arm.

Cryptocurrencies are not beyond redemption, certainly – but the answer to their problems does not lie in the hubris of Telegram or Kik, which attempt to build atop a system that is already unstable and then shout that they are disruptors. If they wanted to make the blockchain really change the world, they’d set about lowering its power consumption (as some, like Ethereum, are trying to do); they’d work with vendors to see how trust in cryptocurrencies could be created. Instead, they’ve chosen a path of overblown estimates and wild claims. We’ve seen bubbles before, and they usually turn out poorly.

The Age of Big Data Scandal should not destroy research opportunities

Reading Time: 2 minutes

The Telegraph‘s revelation that Public Health England had been offering data to a firm known to work with big tobacco is, rightly, scandalous. At the very least, it shows a level of incompetence in verifying to whom data on around 100,000 cases of lung cancer was handed. At worst, it suggests a recognition by special interest groups and lobbies that the best way to get their hands on data is to hijack public bodies.

All evidence points to the former, but this is scant relief. The data might be anonymised, but (as The Telegraph and others worried) how much damage could it do in the fight against a very avoidable cause of lung cancer? Equally pressing is the fact that the word ‘anonymised’ is up for grabs. With sufficient auxiliary datasets, it’s possible to crack most attempts at anonymisation. And even assuming that William E Wecker, the big-tobacco-affiliated firm, chooses not to sell the data on to all and sundry (a big assumption, given the US’s lax laws on data protection), it is questionable just how secure their encryption is.
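To see why ‘anonymised’ is such a slippery word, here is a minimal sketch of a linkage attack in Python. Every record and name below is fabricated for illustration; the point is simply that a handful of quasi-identifiers (postcode, birth year, sex) shared between an ‘anonymised’ dataset and a public register can be enough to put names back on the data:

```python
# Toy illustration of a linkage attack on 'anonymised' data.
# All records are fabricated; no dataset is referenced.

anonymised_health = [
    {"postcode": "SW1A", "birth_year": 1953, "sex": "F", "diagnosis": "lung cancer"},
    {"postcode": "M1",   "birth_year": 1980, "sex": "M", "diagnosis": "asthma"},
]

public_register = [
    {"name": "Alice Example", "postcode": "SW1A", "birth_year": 1953, "sex": "F"},
    {"name": "Bob Example",   "postcode": "M1",   "birth_year": 1980, "sex": "M"},
    {"name": "Carol Example", "postcode": "M1",   "birth_year": 1975, "sex": "F"},
]

def reidentify(health_rows, register_rows):
    """Match each 'anonymised' row against register entries sharing its quasi-identifiers."""
    matches = {}
    for row in health_rows:
        key = (row["postcode"], row["birth_year"], row["sex"])
        candidates = [r["name"] for r in register_rows
                      if (r["postcode"], r["birth_year"], r["sex"]) == key]
        # Exactly one candidate means the 'anonymous' row is fully re-identified.
        if len(candidates) == 1:
            matches[candidates[0]] = row["diagnosis"]
    return matches

print(reidentify(anonymised_health, public_register))
```

This is not hypothetical in principle: researchers have repeatedly shown that a large share of people can be uniquely identified from just such combinations of seemingly innocuous attributes.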

Big data scandals are not merely the old story of single stolen identities being bandied about: each breach or handover puts more information out there. Your health status, marital status, criminal record (even for minor infractions decades past) can be put on offer, threatening our personal autonomy. Who wants their bosses or advertisers or criminals knowing everything about their lives?

There are several ways we can deal with this. One is to simply ignore the problem, and argue that to put limits on big data collection and transfer is to threaten creativity and innovation. This is the American way, and in a sense, it has its merits – particularly when it comes to research. As academics argued as early as 2009, kneejerk reactions to data breaches threaten key scientific research. Work on massive data sets allows researchers to find patterns that traditional scientific work never could. To clamp down too heavily is to harm this in the longer term.

And yet it is clear that our current position is far too lax, and far too keen to ignore the damage done here and now to consumers. Here the EU has led the way, legislating to punish those – like Public Health England – who hand over data without bothering to check who’s at the receiving end. The fines it imposes are hefty, and for many small businesses the upheaval of gaining GDPR compliance is unwelcome. Yet the regulation also recognises the importance of making user privacy an integral part of dealing with big data.

We all should have a right to anonymity when it comes to handing over sensitive data: the idea that breaches are a fair trade-off is the rhetoric of big companies unwilling to put in the work to actually invest in security. At the same time, we must balance this right with a recognition that real research – the sort which can effect broad social change for the better – requires big data. A balance between trust and innovation will be key going forward.

GDPR: five guides for the five months to go

Reading Time: 2 minutes

The General Data Protection Regulation (GDPR) kicks in on May 25th, and promises to be one of the most comprehensive shake-ups of data handling in history. It’s not just European companies which will be affected: anyone hoping to do business with the European Union and Britain will have to ensure that they are up to scratch. That’s a big boon for customers, who will gain control over who gets access to their personal information, but a massive wall for companies which have often played fast and loose when it comes to security.

Our Insight piece last month picked up that guides remained the top trending terms with regards to GDPR – so for this blog, we’ve gathered some of the best writing on GDPR – for companies and for customers – showing how businesses can stay within the lines.

Europe’s data rule shake-up: How companies are dealing with it – The Financial Times

A very clear piece, with case studies of how companies affected by GDPR will have to change their business practices to stay compliant. It also includes an important section on how GDPR shifts the burden of the right to be forgotten. Where it used to be the responsibility of data controllers to ensure data privacy, data processors (including big companies like Microsoft and Amazon which host data for other businesses) will now be on the spot to deal with the issue: a potentially Herculean task, depending on how well their data is kept.

How the EU’s latest data privacy laws impact the UK’s online universe: Tips to prepare your website for GDPR – The Drum

A great step-by-step guide to GDPR-proofing a website, with a breakdown of potential areas where sites can fall out of compliance. It’s simple, but makes clear how easy it is to fall foul of the new laws without firm preparation.

Rights of Individuals under the GDPR – The National Law Review

A worthwhile read for both users curious about their new rights, and companies who will have to ensure that they are met to avoid hefty fines. These include the right to access any data held on them by an organisation, the right to withdraw consent at any time during the data processing or collection period, and the right to judicial remedy against data controllers and processors.

How Identity Data is turning toxic for big companies – Which 50

Less of a guide than the other pieces but a good read nonetheless for those keen to understand how the information ecology is fundamentally shifting. It points to the increasingly high number of annual breaches affecting large companies – and the fact that the fines levied against them under GDPR will make storing so much data so poorly potentially cost-ineffective.

General Data Protection Regulation (GDPR) FAQs for charities – Information Commissioner’s Office 

A handy piece for charities and small businesses looking to stay compliant, right from the horse’s mouth, including links to the ICO’s self-assessment page and other tools and guides to ensure that businesses don’t stray beyond the lines.

The GDPR presents a challenge to existing norms, and businesses will have to step up to stay in check. But it also presents an opportunity for ethical data processing, and a greater bulwark against the breaches which seem to plague the tech industry: a vital step, at a time when big tech stands on the brink of either moving forwards or falling into the ways of the old monopolies.

What to Watch in 2018: The Biggest Tech Trends of the Year to Come

Reading Time: 3 minutes

2017 has been a tumultuous year the world over – not least in technology. Between massive hacks of public and private organisations, the death of net neutrality in America, and the massive (and temporary) upsurge in the value of Bitcoin and other cryptocurrencies, 2018 has a lot to live up to. Here are the top five predictions for big tech trends over the coming twelve months.

  1. GDPR will set in – and many companies won’t be ready

The European Union’s General Data Protection Regulation (which will also apply in a post-Brexit Britain) is set to kick in on 25 May, 2018. Looking at the report we produced in partnership with Right Relevance, we found that the key terms over the past month were largely focused on guides or webinars to help get compliant, or else on companies like Uber which had suffered catastrophic data losses due to poor security practices.

This sign of awareness is encouraging, on the one hand: the GDPR attempts to enforce strict punishments on companies which fail to protect personal data of customers, and will enact equally strict restrictions on what processing can be done with that data. At the same time, with just a matter of months to go until the law comes into effect, there’s a danger that companies underestimate how much they need to do to get compliant. Expect more than a few cases of large companies being hit by data breaches, and having to shell out a lot of money for their errors.

2. Hacking attacks will only get bigger

Ransomware attacks like WannaCry – which hit NHS Trusts, amongst other organisations – and Petya/NotPetya showed both the power of hackers (state sponsored or otherwise), and the unpreparedness of major national entities. Even ignoring the GDPR fines, the situation is grim: unless cybersecurity improves, we are likely to see threats to the national grid and other vital infrastructure.

It’s not even just the Russians who we should be worrying about (although given the probability of the second Cold War getting hotter, nothing should be ruled out): the tranches of tools released by Wikileaks dubbed Vault 7 and Vault 8 show that some very powerful weapons designed by the US government are out in the hands of anyone smart and malicious enough to use them.

3. The Cryptocurrency Bubble bursts (maybe)

Perhaps a bit of a cop-out as predictions go, but the cryptocurrency bubble boasts a strange resilience (experts long predicted it would pop before the end of the year). Abrupt falls have nonetheless left the value of Bitcoin in flux.

There are two possibilities here: the turbulence frightens enough cryptocurrency enthusiasts that they start to sell to try and cash out, or they laugh it off in the belief that bubbles are impossible in cryptocurrencies. Either way, they’ll be confronted by the reality that fewer and fewer outlets accept blockchain based currencies. If that doesn’t change (and there are no clear reasons it will), it gives way to a third possibility: a slow and painful decline as the money of the future goes back to being a curiosity.

4. The Internet of Things will continue to expand…sometimes, too fast

The idea of an internet of things – where everything you own carries a tag, allowing it to produce data to maximise your lifestyle – is well established in theoretical circles. With Alexa, Amazon’s voice-controlled personal assistant and the speakers it runs on, we’ve seen this sort of technology starting to make inroads into our homes.

Expect to see a massive expansion of this over the coming year. Between smart watches, shoes, clothing, water bottles, and so on, the amount of data you’ll have to plan your life will be unrivalled by any earlier period. Not that it’s unproblematic: upstart companies may not think your personal data should be as private as you do (especially if they’re headquartered outside the EU). There’ll almost certainly be some consumer battles over that in the coming year.

5. Tech Giants will get into more scraps, more often

We live in strange times, where technology companies battle over content production and distribution. That was what we saw when Google pulled YouTube from Amazon’s Fire TV devices. It’s a not-so-subtle reminder that whilst the two companies come from very different backgrounds, it’s digital content over which they now struggle. YouTube, once home to cat videos and amateurs, is increasingly moving towards professional content creation with YouTube Red – the decision to remove this from Amazon is no little snub.

Then again, Amazon is hardly blameless in the debacle, having removed a number of Google products from its store – including Google Home, a rival to Alexa. Given Amazon’s predominance in online sales, that’s no merely symbolic act of aggression. Expect to see this scuffle – and others like it – continue, as the giants of the technology world increasingly overlap in their industries.

Sesame Credit and the Future of Social Credit

Reading Time: 2 minutes

When it comes to bashing countries for poor internet freedom practices, China usually appears near the top of the list – and with good reason. Perhaps in part that’s because, in contrast with more crude filtering systems adopted in many authoritarian states, the Great Firewall is an almost elegant panopticon. The sheer level of surveillance – and capacities for intervening – can look like an early draft of a Black Mirror episode. Take, for example, the ability to effectively remove images deemed unsuitable for the interests of the state ‘mid-air’. Where the Soviets had to make do with erasing people after the fact, Chinese internet censors can do so on a real-time basis.

Sesame Credit seems, in a sense, to be the obvious outcome of this level of monitoring and capacity for intervention. The so-called ‘social credit’ is opaque in its operation, but from what we understand, citizens will be able to ‘earn’ credits by such patriotic activities as pro-government posts on message boards. A higher score will mean greater perks, incentivising citizens to behave as suits the Communist Party of China.

There is something thoroughly Chinese about this – and not in a negative way. In e-commerce, the country outstrips its competitors with home-grown giants like Alibaba. Granted, they have been grown in a sort of incubator, with Western competitors artificially kept out, but they have achieved success on a scale which surely makes even Facebook or Google jealous. The ease of access to functions through all-in-one apps like WeChat is another example of an approach to the internet with a great number of affordances. On the more positive side, the use of something like Sesame Credit shows a continuing move away from paper money. This was the goal of China’s almost as populous neighbour to the West, through the process of demonetisation. Yet India has largely failed in its bid to go digital: in spite of the number of new digital bank accounts created, the majority (owned by the urban poor) are empty, and the rural poor (with no access to the internet) never had them to start with.

This cannot detract from the cost in terms of citizens’ rights to privacy, or freedom of expression. It also opens up a number of worrying scenarios in which a user’s social credit could be lowered. A drunken error or a joke made at the expense of the government on a relative’s account, for example, might have an impact; more concerningly, a malicious actor could effectively fabricate dissent. There is also the question of automation. How well can the system deal with bots set up to pump out pro-government posts? Will it lead to inflation (at least temporarily, before accounts are presumably removed)? The lack of adequate information on this front makes this largely guesswork, sadly.

The final question is whether social credit in the style of Sesame Credit will spread beyond China. Many have pointed to pre-existing systems, like the credit scores which are prevalent across the West – and they have a point. Much like Sesame Credit, they can have immense impacts on our lives and are by no means transparent. And yet, largely speaking, our behaviour on Facebook or Twitter has no major bearing on them (we hope). The age of the all-in-one app is yet to hit us – but when it does, there’s no reason to assume that social credit would not be its outcome.

Could 2018 be the year we make technological education into something better?

Reading Time: 2 minutes

It always seemed odd that we didn’t do IT at my secondary school after year 7 (the first year). We had a rudimentary play around with PhotoShop, made mindmaps and mock web pages – and then it abruptly ceased. The assumption was that we’d pick up the computer skills which we needed along the way.

On the surface, that was largely true: I don’t think our class was disadvantaged as netizens by the lack of an IT course. And, glancing at a syllabus for GCSE ICT, we probably didn’t miss out on much: questions about whether text is left- or right-justified, or knowing the proper name for a USB connector, are of limited value (and not just because everything’s on the Cloud now).

But ICT teaching is increasingly about more than learning the parts of a machine, or even learning to code. Understanding computers and the internet is more than an academic or abstract skill: it’s practically key to citizenship, and to understanding our rights (and how best to safeguard them).

We live in an eminently teachable era for this, too: with the onset of GDPR in just a matter of months, raising a generation to understand the importance of personal privacy is key. Rather than waiting for pupils to be faced with the most unpleasant examples of abuses of trust (in the form of revenge porn), good technological education can directly inculcate wariness about over-sharing online.

The same goes for more complex issues, like algorithms. Granted, Facebook may no longer be the hippest space for youth culture, but its dominance can’t be ignored; nor, in spite of its inability to turn a profit, can Twitter’s. Both of these spaces have algorithms with deeply questionable biases, which allow for the creation of echo chambers – and for deeply unhealthy scrolling habits. A good education wouldn’t tell students not to use these platforms (that would only enhance their counter-cultural appeal): it would instead encourage critical thinking from an early age. Setting aside the political cycle of defeated parties trying to reach out and become more like their opponents, there is a possibility of avoiding the cognitive dissonance which seems to mark modern politics.

The boons wouldn’t just be for students as consumers – encouraging a better respect for privacy and ethics when it comes to data would also support the companies they might work for or use in the future. Privacy by design is a good idea – in principle. In reality, our current education system rarely prioritises this thinking outside academia or research firms. Having students grapple with these major issues from school onwards could offer a workforce fully committed to the values of good security.

Computing is difficult to understand; the internet even more so – but that doesn’t mean we should ignore them, or treat them like a sort of inexplicable black box. By crafting ICT programmes which don’t merely get across code, but show the power structures and politics behind digital life, we can offer something as valuable as teaching the hard sciences, the humanities, or citizenship – if not more so.

Can Facebook Overcome its PR Nightmare?

Reading Time: 3 minutes

Last week, I decided, on a whim, to listen to the Alex Jones Show (There’s a war on for your mind!). Texas’s favourite son (who, on the scale of desirable exports, comes somewhere between Halliburton and the Texas Chainsaw Massacre) wasn’t on. Instead, his replacement that night was lambasting ‘the Facebook’ for trying to get parents to sign their kids up. Of course, there was a thick layer of tinfoil slopped on top, but it really is a disturbing truth that Facebook (and other platform media) manages to make people who sell snake oil for a living look sensible (the excellent Zeynep Tufekci wrote an article on Mark Zuckerberg’s defence that annoying practically everyone was a feature, not a bug).

Few companies would brazenly admit that a study showed that using their service made people unhappier – and then claim that the problem lay not with the technology, but with how people use it. That was Facebook’s latest ploy, in a blog post which referenced a recent Yale study claiming to show that passive consumption (i.e. scrolling through your newsfeed) makes you unhappy, but which attempted to deflect this with other research claiming that more active behaviour, like direct messaging, had the opposite effect. All science aside, the piece felt distinctly tone-deaf: philosophical posing, rather than anything concrete.

It wouldn’t be the first time the company had made itself the target of ridicule and contempt for its heavy-handed behaviour. Zuckerberg’s immediate declaration after the 2016 American election, that misinformation and junk news like the sort which bombarded swing states throughout that period could have had no discernible impact, was readily mocked. A year later, he walked back this comment; at the same time, it became clear that to at least some extent, Russia had been involved in buying ads from Facebook. That’s not to mention the revelation that the social media platform had allowed targeted ads for anti-Semitic and other bigoted ideologies.

Facebook can get away with this because, in spite of growing competitors, years of bad publicity, and falling market share, it still commands sheer numbers of users. Leave it, even temporarily, and you lose out on a space for mundane but key fixtures such as events and birthdays. The social cost of going ‘off the Facebook grid’ is, admittedly, mitigated by the plethora of alternative systems, but that doesn’t always mean everything is posted everywhere. Of course, as newer generations come online, they may reject the Facebook mantle in favour of platforms like Snapchat, whose affordances (based around the idea of ephemerality) do not offer so much space for passive consumption. But for the time being, we seem to be stuck with Facebook, warts and all: a company increasingly disliked by its user base, even though the more time you spend on it, the more valuable it becomes as a social tool.

It doesn’t have to be this way. Facebook’s core values are a major part of the issue, I’d argue: the kind of Silicon Valley libertarianism, which views criticism of over-reaching features as attempts to stifle creativity, and which has total belief in itself. There’s a reason that a common enough internet joke is that Mark Zuckerberg is an alien or a robot: the sense of disdain for users, and their interests and privacy, is almost palpable.

Rather than putting out posts which try to defend itself against claims that it damages users’ mental health, the service could offer more limited models, or ways to actually limit people’s exposure to passive content. It could act faster when it becomes clear that misinformation is being spread through its adverts; it could also be pre-emptive in cutting down on language used by those attempting to sow dissent and hatred.

It would come at a cost to advertising revenue, most likely: the key lifeline which keeps the company afloat. But it would offer a chance for it to move out of the PR rut it has fallen into, and perhaps start to retake market share. Being ethical doesn’t always pay off immediately; being unethical is rarely healthy in the long run.