How Will Mobile Apps Make Business Easier

Reading Time: 4 minutes

The field of mobile apps is growing as we speak, in two main ways. One is the app development market, which employs millions of people globally. The other is the use of mobile apps for all sorts of business purposes. As new technologies are developed, they also find their way into mobile implementations. So, let’s look a bit more closely at the further potential of mobile apps in the world of AI, AR and other tech innovations.

 

1) Artificial intelligence and mobile apps

Smartphones, tablets, and wearables already function via smart software tools that can learn some of the patterns established by their users. However, this is only the tip of the iceberg. Simply put, AI is going to change the mobile app market.

For starters, AI will gather and memorize our living and working habits. In turn, this will enable app creators and mobile providers to prepare ready-made offers tailored to our daily routines. Since many of our decisions and actions will be anticipated, these tech innovations should lead to a more productive work day and better-organized free time.

Apart from that, app developers will have a chance to use this AI-collected data for QA testing. As a result, they’ll save time that can be invested in solving complex UX and functionality problems. AI features will handle the tedious coding tasks and analyze the UX input gathered from customers.

Nevertheless, all these innovations could harm our privacy if our personal data isn’t collected and stored in accordance with legal guidelines. That’s why every app developer will need to take the GDPR into consideration. Still, if you follow these rules, you’ll benefit from these tech innovations, including the AI features.

 

2) Augmented reality in business apps

The growth of the eCommerce industry has taken the global retail market by storm. Most renowned vendors already use apps, in addition to their business websites, to make their products available for shopping on the go.

Things are moving even faster today, especially with the introduction of augmented reality in eCommerce.

The greatest benefit of AR in this context is that it makes shopping even simpler and less expensive for customers. For instance, Amazon has introduced AR features in its app: you can project the item you’d like to buy into the space where you’d like to place it.

Similarly, car dealers are taking the plunge into AR in their everyday work. Car buyers no longer have to trail around countless dealerships; they can simply use AR features in dealers’ mobile apps to preview new vehicles. Read more about these AR trends in the article on The Drum website.

The downside of AR is that it’s still expensive for many SMBs. However, this will change sooner than we might expect, which will enhance the productivity of smaller business enterprises.

 

3) Accounting benefits of mobile apps

Small business owners often struggle with accounting demands. From their in-house books to bank accounts to tax returns, they frequently fail to process some of the data. These mistakes can result in inaccurate accounting records and financial penalties from the tax authorities.

The good news is that there are literally thousands of accounting mobile apps that will make your business life easier.

Still, this large number of apps calls for caution. Naturally, the best way to avoid risks in this field is to use mainstream tools such as QuickBooks or FreshBooks. Both have top-notch apps and well-developed cloud features, which makes them a great fit for new business owners.

Apart from that, you can use modern accounting tools on your phone to simplify the payment procedure. In line with that, it’s wise to keep an online invoice maker at your fingertips. Every time you need to handle larger orders or payments, you can issue an invoice in no time and speed up the purchase.

 

4) Increased productivity with mobile apps

Mobile apps have already improved our work productivity. Take the accounting apps described in the previous section: because you can use them on your mobile on the go, they let you deal with business paperwork while commuting home from work or waiting in line at the supermarket.

Moreover, mobile apps enable SMB owners and their workers to communicate constantly about their projects. What’s more, many project management tools come with mobile apps as well, so you have all-in-one solutions for work organization, time management and data sharing. Now imagine how advanced all these tools will get when AI, AR and other cutting-edge tech features become fully implemented in them.

Also, using mobile apps in various business ventures enables their owners and employees to collaborate remotely. This opens up an immense number of possibilities for employment, cooperation and better connectivity, and with them gains in business productivity. In the future, these features will lead to further improvements in working conditions and efficiency.

 

Conclusion

The number of mobile users is already counted in billions, and advances in the production of smart devices and apps will push those figures higher still. Mobile apps, meanwhile, keep improving in step with that growing user base. The combination of these two trends will produce a more engaging and inspiring work environment in the future, which will benefit business owners, their employees and, finally, the users of their services. That’s why we should all be looking forward to the app-enhanced business future.

 

This blog post was written by our guest Mark, a biz-dev hero at Invoicebus, which you can also follow on Twitter.

Cyborg Chess and What It Means

Reading Time: 3 minutes

When arguably the greatest chess player of all time, Garry Kasparov, was beaten by Deep Blue in 1997, some took it to mean that human intelligence had become irrelevant. For instance, Newsweek ran a cover story about the match with the headline “The Brain’s Last Stand.” However, the chess-related conflict between human and computer cognition turned out to be somewhat more convoluted than that.

In the wake of the match, Kasparov came up with a concept he called “Advanced Chess,” wherein computer engines would serve as assistants to human players—or the other way around, depending on your perspective. Kasparov’s idea was that humans could add their creativity and long-term strategic vision to the raw power of a computer munching through plausible-seeming variations. He thought that, perhaps, in long games, such cyborg teams could beat computers, complicating the idea that human intelligence had simply become obsolete.

He was right. Highly skilled cyborg players turned out to be stronger than computers alone. Most famously, in 2005, a cyborg team won a so-called “freestyle” tournament—one in which entrants could consist of any number of humans and/or computers. And, even more surprisingly, the tournament was won by a pair of relatively amateur players—Steven Cramton, and Zackary Stephen, both far, far below master strength. They came out on top of the powerful program Hydra, as well as esteemed grandmasters like GM Vladimir Dobrov. And the secret to their success seemed to be that they were the best operators—they had figured out the ideal way to enhance the chess engines’ intelligence with their own.

In other words, for the human half of a cyborg team, being a supremely good chess player wasn’t as important as knowing how to steer computer intelligence. Steering the AI was itself a relevant skill, and the most important one. Cramton and Stephen ran five different computer programs at once—both chess engines and databases that could check the position on the board against historical games. Using this method, they could mimic the past performances of exceptional human players, play any move that all the engines agreed upon, and examine more skeptically those positions where the engines disagreed about the right way to proceed. Occasionally, they would even throw in a slightly subpar but offbeat move that one of the programs suggested, in order to psychologically disturb their opponents.
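
To make that operator role concrete, here is a minimal sketch of the kind of consensus loop it implies: poll several engines, play automatically only when they all agree, and flag disagreements for human judgement. It is an illustration rather than Cramton and Stephen’s actual setup; it assumes the python-chess library and locally installed UCI engines, and the engine paths below are hypothetical placeholders.

```python
# Sketch of a "cyborg" consensus loop: several engines suggest a move,
# unanimous suggestions are played automatically, and disagreements are
# handed to the human operator. Not the 2005 winners' real tooling.
import chess
import chess.engine

# Hypothetical paths to locally installed UCI engines.
ENGINE_PATHS = ["/usr/bin/stockfish", "/usr/local/bin/another-engine"]

def consensus_move(board: chess.Board, seconds: float = 1.0):
    """Ask each engine for its preferred move and report whether they agree."""
    suggestions = []
    for path in ENGINE_PATHS:
        engine = chess.engine.SimpleEngine.popen_uci(path)
        try:
            result = engine.play(board, chess.engine.Limit(time=seconds))
            suggestions.append(result.move)
        finally:
            engine.quit()
    return suggestions, len(set(suggestions)) == 1

if __name__ == "__main__":
    board = chess.Board()
    moves, unanimous = consensus_move(board)
    if unanimous:
        print("Engines agree - play", moves[0])
    else:
        # This is where the human operator earns their keep: consult game
        # databases and long-term plans before picking among the candidates.
        print("Engines disagree - human judgement needed among:", moves)
```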

This is kind of a beautiful picture of computer-human interaction, in which humans use computers to accomplish cognitive tasks in much the same way that they use cars to accomplish transportation. However, there’s a strong possibility that this rosy picture won’t last for long. It’s possible that, eventually, chess engines will get strong enough that humans can’t possibly add anything to their strength, such that even strong operators like Cramton and Stephen would, if they tried to provide guidance, only detract from the computer’s expertise. In fact, this may have happened already.

In May of 2017, Garry Kasparov said in an interview with Tyler Cowen that he believed cyborg players were still stronger than engines alone. However, that was before Google’s AlphaZero chess engine, in December of 2017, absolutely destroyed a version of one of the world’s best chess programs, Stockfish. AlphaZero, which was grown out of a machine learning algorithm that played chess against itself 19.6 million times, won 28 out of the match’s 100 games, drew 72, and lost not one.

What was more notable even than AlphaZero’s supremacy was its style. AlphaZero played what seemed like playful, strange moves: it sacrificed pieces for hard-to-see advantages, and navigated into awkward, blocked-up positions that would’ve been shunned by other engines. Danish grandmaster Peter Heine Nielsen, upon reviewing the games, said “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.” If there’s any computer that’s exceeded the capacity of cyborg players, it’s probably AlphaZero.

This progression—from the emergence of strong AI, to the supremacy of cyborgs, to the more complete supremacy of even stronger AI—could pop up in other fields as well. Imagine, for example, a program, Psychiatron, which could diagnose a patient’s mood disorder based on a momentary scan of their face and voice, searching for telltale signs drawn from muscular flexion and vocal intonation. That program would make psychiatrists irrelevant to the diagnostic process.

However, you might still need a psychiatrist to explain Psychiatron’s diagnosis to the patient, and to provide that patient with a holistic treatment that best addresses the many factors behind their disease. Psychiatron would simply enable psychiatrists to be better. Eventually, though, that cyborg team might be superseded by an even stronger Psychiatron, which could instantly dispense the right series of loving words upon making a diagnosis, as well as a carefully coordinated package of medications and an appropriate exercise plan, all through machine learning techniques that would be completely opaque to any human operator.

This is a version of the future that’s either utopian or nightmarish depending on your perspective—one where we are, as Richard Brautigan wrote, “all watched over by machines of loving grace,” who, like parents, guide us through a world that we’ll never fully understand.

Cryptocurrencies and the True Source of Value

Reading Time: 5 minutes

One of the arguments against Bitcoin and cryptocurrencies in general is that they do not represent true value. Behind the crypto-algorithms, according to this line of argument, is really nothing that could objectively be considered currency; indeed, nothing at all. Hence, cryptocurrencies are a bubble which is bound to burst. This is not just a man-on-the-street opinion; it has been espoused by the billionaire investor Howard Marks, who predicted the “dotcom bubble” of the 1990s. “In my view, digital currencies are nothing but an unfounded fad (or perhaps even a pyramid scheme), based on a willingness to ascribe value to something that has little or none beyond what people will pay for it,” Marks said in 2017. Marks underscored the point with historical precedent, pointing to the notorious “tulip mania” that started in the Netherlands in the 17th century. In 1637, at the height of the mania, a single tulip bulb could be worth up to ten times the annual income of a skilled craftsman.

Lydian coin. Inscription reads “I am the sign of Phanes”. Electrum (alloy of gold and silver), length: 2.3 cm. Late 7th century BCE, found at Ephesus. Israel Museum, Jerusalem.

One might wish to consider other historical precedents, however. Currencies per se are a surprisingly recent invention in human history. According to the archaeological record, the first coins were used in Lydia (present day Turkey) in the 7th century BCE (see image above). This is long after the rise of cities and kingdoms and indeed the successful smelting of metals, including gold, silver and bronze; even 500 years after the commencement of the Iron Age in the Middle East. We also know that this was not due to lack of technical engraving ability, since many small metal seals with intricate designs have been found dating from many centuries prior to the 7th century Lydian coins (see image below).

Seal of Tarkummuwa, King of Mera. Silver (diameter: 4.2 cm). c. 1400 BCE, found at Smyrna. Walters Art Gallery, Baltimore.

The anthropologist David Graeber has provided an interesting explanation of why coinage was eventually developed. Coins were not initially used by most ordinary people, he argues. The available archaeological evidence shows that the first coins were used by soldiers. This makes sense, Graeber argues, when we consider that ancient rulers had to find a reliable way of feeding armies at the frontier of their empires. If the soldiers were stationed inland, he points out, it would be extremely difficult to move large amounts of grain or other foodstuffs with them. If, however, standardised coins could be minted and given to soldiers, the soldiers would be able to buy the necessary food from the ruler’s civilian subjects in these far-flung parts of the empire. By taxing his subjects, the king would then recover these metallic tokens of value. Coins thus began as a more efficient way of feeding armies, but once they acquired universally recognised value within the state, they could be applied to any economic transaction.

In order to be hard to forge, coins had to be minted out of rare metals by skilled craftsmen. But even gold, silver and copper, which were used for the earliest coins, have no intrinsic value, as Israeli historian Yuval Noah Harari points out – “you can’t eat it, or fashion tools or weapons out of it.” The lesson here is that no form of currency has value above and beyond what we ascribe to it, collectively, as human beings. Thus, it will not do to dismiss a cryptocurrency, as Marks does, because it has no value beyond what people will pay for it (this is not to say, of course, that other arguments against cryptocurrencies fail; only that this particular line of argument is unconvincing). One might well imagine an ancient Lydian exclaiming, “These bits of metal with their fancy designs and inscriptions have no real value. The whole fraud will surely collapse after the king dies.” And yet, as we now know, it did not turn out that way. The coins had value because enough people came to believe that they did and that was all that mattered.

We have since, although only relatively recently in 1971, abandoned the gold standard, making way for the US dollar as the world’s reserve currency. One could even argue, as some have, that the dollar is a less reliable store of value than either gold or Bitcoin, because the US Federal Reserve can simply print as many units as it sees fit – and indeed, in the rounds of quantitative easing since the 2008 crash, it has been printing an unprecedented number of them. The amount of gold in the world runs up against physical limitations, whereas the amount of Bitcoin runs up against mathematical ones. While it is true that other cryptocurrencies can avoid the same limitations that appear to be built into Bitcoin, matters such as the total number of units to be issued and the value of each unit relative to everything else still depend on the vital criterion of consensus by the community of users. Notice that, technological aspects aside, this criterion also applied to the very first currencies used by our species. While it is true that the first coins were issued by rulers in a top-down fashion, these rulers did not realise that they had brought into being a monetary system that would soon escape their control. As Graeber also notes, after appearing in Lydia, coinage soon emerged independently in different parts of the world. This meant that when different empires came into contact with each other, they had to arrive at a fair exchange rate. If the empires were of roughly equal power, this could not be determined by either of their rulers and was determined instead by market factors beyond any one individual’s control. Exchange rates between different official currencies have thus continued to fluctuate from ancient until modern times.
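
Those “mathematical limitations” are concrete: Bitcoin’s block subsidy started at 50 BTC and halves every 210,000 blocks, which caps total issuance at just under 21 million coins. The short sketch below sums that schedule; it ignores the satoshi-level integer rounding the real protocol applies, so the true cap is fractionally lower.

```python
# Approximate Bitcoin's hard supply cap from its halving schedule:
# the block subsidy begins at 50 BTC and halves every 210,000 blocks.
# A simplified sum that ignores satoshi-level integer rounding.
INITIAL_SUBSIDY_BTC = 50.0
BLOCKS_PER_HALVING = 210_000
SATOSHI = 1e-8  # smallest unit of Bitcoin

total = 0.0
subsidy = INITIAL_SUBSIDY_BTC
while subsidy >= SATOSHI:
    total += subsidy * BLOCKS_PER_HALVING
    subsidy /= 2.0

print(f"Maximum possible supply: roughly {total:,.0f} BTC")  # about 21,000,000
```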

Bitcoin and other cryptocurrencies could indeed be seen as the next logical step: prior to their emergence, the only “non-physical” medium of exchange resembling a truly global currency was the IMF’s “Special Drawing Rights” or SDRs, although as their name suggests these have only been issued and used in exceptional circumstances. Better yet, unlike SDRs, cryptocurrencies are not controlled centrally in any way. Instead, they are designed to bypass both governments and banks. All they require is a public ledger, the blockchain, to keep track of all transactional information. Governments and banks understandably find this frustrating and will likely do all they can to bring cryptocurrencies under their control. In this respect, however, they may resemble a Lydian king who tries to fix the prices of various commodities, only to find his attempts frustrated by his subjects, who find roundabout ways to buy or sell commodities at market prices.
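
The “public ledger” idea at the heart of this is simple enough to sketch in a few lines. The toy example below is an illustration only, not Bitcoin’s actual data structures (real cryptocurrencies add proof-of-work, digital signatures and peer-to-peer replication); it shows the essential property that each block commits to the hash of the previous one, so quietly rewriting past transactions breaks every later link and is immediately detectable.

```python
# Toy hash-linked ledger: each block stores the hash of its predecessor,
# so tampering with any past transaction invalidates every later block.
# Illustration only - real blockchains add proof-of-work, signatures
# and a peer-to-peer network of validators.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev_hash: str, transactions: list) -> dict:
    return {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}

genesis = new_block("0" * 64, [])
b1 = new_block(block_hash(genesis), [{"from": "alice", "to": "bob", "amount": 3}])
b2 = new_block(block_hash(b1), [{"from": "bob", "to": "carol", "amount": 1}])

# Anyone holding a copy of the chain can verify it by recomputing each link.
chain = [genesis, b1, b2]
valid = all(block_hash(chain[i]) == chain[i + 1]["prev_hash"] for i in range(len(chain) - 1))
print("ledger consistent:", valid)  # True until someone edits an earlier block
```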

The fact of the matter is that we are now all living in a global economy, and cryptocurrencies have beaten the IMF to the finish line of establishing imaginary units of value that are created (or “mined”), recognised and used globally. One or even all of them may collapse eventually, but the point is that such an event cannot be brought about by governments or banks. The technology is now out there, as is the will to avoid the fiats of governments or banks. And if they do collapse irreversibly, that is not necessarily good news for fiat currencies. The need for an independent global currency will likely persist even in their absence, perhaps leading to a return to something like the gold standard. In any event, when we go back to the very root of currencies and what makes them valuable, we may well discover a counter-intuitive (at least, to some) truth: that both gold and cryptocurrencies are better placed as stores of value than fiat currencies, such as the pound, dollar or euro.

There is No Solution to the Problem of “Fake News”

Reading Time: 4 minutes

In the aftermath of the 2016 election, the term “fake news”, seldom heard previously, became ubiquitous. This was, of course, no coincidence: the unexpected victory of Donald Trump cried out for an explanation, and invoking the concept was one such attempt by the president’s many critics, who could not bring themselves to face the possibility that he won fairly. As one conservative commentator saw it, “just as progressive ideas were being rejected by voters across the western world, the media suddenly discovered a glitch which explained why. Fake news is the new false consciousness.” But the dissemination of disinformation and propaganda is as old as civilization itself. The internet is merely a new means of spreading them, and even then, not an especially new one. Consider, for instance, the anti-vaccination and “9/11 truth” movements of the preceding decades, and the role played by the internet in amplifying the noises of otherwise small groups of dedicated ideologues or charlatans. So we are still left wondering: why only in the last few years has the term “fake news” entered public discourse?

A possible answer is that the point has been reached at which traditional purveyors of news feel that they no longer have control over broader narratives. Their sounding of the alarm over “fake news” is thus a desperate rallying cry in order to regain this control. Some have drawn an analogy to the invention of the printing press in the 15th century, which also revolutionized the spread of information and led to the Protestant Reformation (and of course, disinformation, such as exaggerated accounts of the horrors of the Spanish Inquisition). From this perspective, it is futile to resist the changing ways in which information spreads. One must adapt or die. In many ways, Donald Trump, who began his presidency fighting off a cascade of “fake news” allegations, including about such petty matters as the size of his inauguration crowd, has done a better job of adapting to the new informational ecosystem. Twitter, with its 280-character limit (until recently, only 140), has turned out to be the perfect medium for a president with a reportedly short attention span. He also uses it to bypass the mainstream media in order to reach the public directly with his own message or narrative. And the president has masterfully turned the weapon of “fake news” around, aiming it right back at the media. At the end of 2017, his first year in office, he seemed to relish releasing “The Highly Anticipated Fake News Awards”, a list of misleading or false anti-Trump news stories, undermining the media’s insistence that it is impartial.

For all its faults, however, the mainstream media does have a legitimate point about the dangers of “fake news”. There must be an objective standard against which all purveyors of news are held and there does need to be a common set–or at least core–of facts upon which all rational parties in society can agree. But this is easier said than done, and it is far from obvious that there is a “quick fix” solution to this problem that does not merely favor one set of news purveyors over another, based on criteria other than factual accuracy. For example, many in the US fear that the Federal Communications Commission’s (FCC) proposed changes to “net neutrality” rules will give a few major companies the ability to speed up, slow down or even block access to certain web addresses or content. Comcast, for instance, is simultaneously the largest television broadcasting company, through its National Broadcasting Company (NBC) channel, and the largest internet service provider in the United States. Should the current FCC chairman’s plans to end “net neutrality” succeed, this will put Comcast in a powerful position to regulate–effectively–much of the online media landscape according to its own financial interests as a news organisation.

Social media companies such as Facebook have come under fire for spreading “fake news.” Although Mark Zuckerberg initially argued that Facebook is a tech platform and not a media company per se, he was eventually forced to concede that whatever he had originally intended the company to be, an increasing number of people around the world did in fact get their news primarily from their Facebook newsfeed and that Facebook therefore had “a responsibility to create an informed community and help build common understanding”. Behind this corporate newspeak must also lie a very real fear that government regulation of Facebook as a media company could end up crippling its business model. If Facebook could be held liable for the spread of false information, it would need to hire thousands of fact checkers to nip this in the bud whenever it occurs, but doing so would be far too costly for the organisation, to say nothing of the practical challenges involved. Thus, it has had to rely on very imperfect “fake news” detection algorithms, and more recently, a deliberate de-emphasis of news altogether, the idea behind this being to return the platform to its original purpose of connecting friends and family.

But it is gradually dawning on many people that the war on “fake news” may be unwinnable. This is because there is no in-principle solution to the age-old philosophical problem of how to know what is true. If anything, this problem has become vastly more difficult now that there is an abundance of information to sort through, presented to us in a non-random–but not necessarily truth-tracking–way. We would all do well, however, to exercise greater skepticism in response to all truth claims, including official ones, such as the vague claim that Russia “hacked the election”. Skepticism does not come naturally to human beings, who are notoriously credulous. One should thus be taught to be skeptical from a young age, and to favor logical consistency and empirical evidence over other considerations when evaluating competing truth claims. This approach falls well short of a real solution, but it may help us individually and collectively to navigate the treacherous ocean of information in which we find ourselves. Hopefully, we will find ways of adjusting to our current information environment and a new equilibrium will emerge from the informational chaos. Cronycle is one platform that is ahead of the curve in this respect: it not only recognizes the problem of information overload, but provides its users with useful tools for finding the trustworthy, high quality content out there in the Wild, Wild Web.

The Strava heat maps are a grim reminder of Big Tech’s power

Reading Time: 3 minutes

The lone hacker used to be the stereotypical threat to national security: think of Kevin Mitnick, who prosecutors claimed could have started a nuclear war by whistling into a phone. Then it was the citizen activist and the whistleblower (perhaps affiliated with WikiLeaks). Then it was the state-sponsored group – perhaps Chinese, perhaps North Korean, perhaps Russian. The Stuxnet virus (which damaged equipment in Iran, and which most likely was a result of US-Israeli teamwork) showed that the digital world could have a very real impact on the physical, lending a whole new urgency to the need to keep computer security levels as high as possible.

So it seems almost anticlimactic that it was an app for tracking users’ runs that undid so much secrecy for the US military in particular. It’s a lesson that perhaps the greatest threat to operational security lies in the treasure trove of data which we often unwittingly produce – and which private companies, in jurisdictions with limited governmental oversight, often think little about.

The case of the app, Strava, is almost farcical: the company decided to publish heat maps of its users to show off its success, and got more than it bargained for when the visualization effectively provided detailed maps of routes in military bases around the world. Much of that data may be of limited use to opposing nations or non-state actors (particularly where Strava doesn’t seem to have been so heavily used). On the other hand, in spaces where the heat maps are bright, the visualization essentially sketches out a handy blueprint for troop movement. At its worst, it highlights locations where military forces were not known to be. It is, to put it mildly, a fairly horrifying outcome for the US armed forces, since Strava is far less heavily used by their opponents (whether those opponents use different apps or simply lack activity trackers remains to be seen).
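
The mechanism behind the leak is worth spelling out. A heat map is essentially GPS samples bucketed into grid cells and counted, so cells along a habitually used track accumulate high counts and outline the route even though no individual user is named. The rough sketch below illustrates the idea with made-up coordinates; it is not Strava’s actual pipeline.

```python
# Rough illustration of why aggregate heat maps leak routes: bucket GPS
# points into a lat/lon grid and count visits per cell. Repeatedly used
# tracks light up, outlining routes without naming any individual user.
# Not Strava's real pipeline; the coordinates below are made up.
from collections import Counter

CELL = 0.001  # grid resolution in degrees (roughly 100 m); an arbitrary choice

def heat_map(points):
    """points: iterable of (lat, lon) samples pooled from many users' activities."""
    counts = Counter()
    for lat, lon in points:
        cell = (round(lat / CELL), round(lon / CELL))  # integer grid indices
        counts[cell] += 1
    return counts

# Two hundred laps of the same short perimeter versus one stray point elsewhere.
perimeter_lap = [(34.000 + i * 0.001, 45.000) for i in range(5)]
samples = perimeter_lap * 200 + [(34.010, 45.010)]
print(heat_map(samples).most_common(3))  # the brightest cells trace the repeated route
```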

Strava, in a move either impossibly brave or impossibly foolhardy, at first went on the defensive and suggested that military personnel should have opted out. There is a nice logic to this for the company (who have kept the heat maps up on their site), since it puts the onus onto the consumer. This is the stance taken by social media platforms like Facebook, whenever an embarrassing event has come to light: you should have put your privacy settings higher!

This is also a neat effort to glide over the conflict of interest in this argument: lower privacy settings equal greater data collection, which means more information to sell on to other companies. By putting privacy as low as possible by default, and by ensuring that changing this is not a simple process, companies like Strava get to have their cake and eat it. The very fact that Strava has now offered to change how its privacy settings work should not be read as a company owning up to its mistakes: it’s a PR move and volte-face as part of an attempt to cover up its own shortcomings.

Confusing designs are not just the product of incompetent programmers – in fact, quite the opposite can be true. For too long Big Tech has gotten away with hoovering up Big Data, before offering belated fixes which do little to help with all the information already collected. This is only going to get worse as we head further into the age of the Internet of Things, where privacy policies will be increasingly obscure, and opting out of a small, screenless device will be practically impossible. Whether it’s consumer pressure or a governmental crackdown (such as heavy enforcement of the GDPR against the most egregious offenders) that eventually forces change, the Strava story is just another case of Big Tech’s disregard for all of us except in how much our data sells for.

Big Tech’s gamble is wearing thin – and that’s bad news for us

Reading Time: 3 minutes

It’s now 22 years since Section 230 of the Communications Decency Act came into force: a landmark provision even in 1996. It meant that those hosting information on the then-nascent internet were not to be treated as “publisher or speaker” of any of that information. This made perfect sense at the time, when the internet remained a small curiosity and it was fair to assume that small, practically amateur companies would have had difficulty scanning all the content they were hosting (the early giants like AOL even then seemed like a different story, but that seems by the by now).

Online content – and the avenues in which it is created and uploaded and perhaps most importantly, disseminated – has only grown since then. We are past Web 2.0, with its blogs and microblogs, and into the Internet of Things – and yet we are plagued with some very pre-digital problems online: child abuse, and terror, and defamation (admittedly in new forms, such as revenge porn). The tech industry’s answer has usually been to shrug when confronted with this sort of material. There’s the free speech argument, and then there’s the legal backing which legislation like Section 230 (and its national/regional equivalents) offer.

British Prime Minister Theresa May’s speech at Davos stands in rude contrast to this civil libertarian ethos. This is not a shock: May views the internet as another sphere which requires total government control and regulation, a mere expansion of the offline world. This is not a new view, and it’s one rooted both in government over-reach, and in a complete lack of technical knowledge. Consider Amber Rudd’s desire to combat the evils of encryption without having any understanding of how it works: there’s a rank arrogance to the professional politician which is only matched (fittingly) by that of the big tech company.

The collision course between the two has long been set, but it’s increasingly clear that public opinion has turned against the argument for laissez-faire. Facebook, Twitter et al long assumed that the utility they offered could trump governmental arguments that they should be regulated more heavily. However, a slew of stories about objectionable content – such as terror and borderline child abuse on YouTube, or targeted ads used to support hatred on Facebook – have increasingly eroded this position.

And that’s bad news for users. The current scheme is broken, admittedly (content providers seem to care much more about PR after scandals than actively working on solving major structural problems), but heavy government regulation is concerning at best. At worst, we can expect to see a quiet creep of illiberal regulations under the guise of national security. Lest we see this as too much of a conspiracy, let’s not forget that in the wake of the Snowden revelations, the British government decided to consolidate mass surveillance powers. By failing to self-police, big tech has fundamentally removed its own popular support base, allowing governments which don’t understand technology and which seek to gain more power in the name of national security an open goal.

Content providers and platforms are in no way victims here – they are equally complicit in what amounts to a rising risk to their users. If they wish to truly avoid over-regulation, they need to move beyond the sort of measures which are patently designed to improve their appearance. Consider YouTube’s plans to fund counter-terror videos: does anyone really believe this will stop someone moving down the path towards radicalisation? A greater emphasis on moderation which goes beyond horribly underpaid contractors (with no support) or crude algorithms may be the only way to save them – and us – from a future which looks a little Orwellian.

Does the Future of Religion Lie in Technology?

Reading Time: 7 minutes

Very little these days seems untouched by technology. Indeed, people’s lives are so saturated with it that they sometimes speak of “withdrawal symptoms” on those increasingly rare occasions when they find themselves without internet access. Some try to escape it at least some of the time, for instance, by “disconnecting” for a day or more when on holiday or on a retreat. Yet surely, one might think, religion is one area that remains largely untouched by technology. This is certainly true of the Amish or ultra-Orthodox Jews, who are outright suspicious of any new technology. But it is true of mainstream religion as well. The eternal “flame” in synagogues is now often electric, churches have replaced candles on their Christmas trees with electric lights, and the muezzin’s call to prayer is often amplified by a loudspeaker. These changes, however, are trivial. Ancient religions have shown themselves able to incorporate technology into their practices, without disappearing or changing beyond recognition. It seems, then, that technology does not directly threaten religion.

Nevertheless, throughout most of the Western world, the churches are empty. Declining church attendance certainly seems to be correlated with technological advancement, but is there a causal connection? Perhaps the further factor causing both is the triumph of the scientific worldview. This laid the groundwork for the discoveries of biological evolution and Biblical criticism–which pulled the rug out from under religion–as well as for rapid technological advancement. What makes Western societies different from non-Western ones is that the former experienced both of these processes simultaneously; non-Western societies, by and large, experienced only the second. It is possible, however, that in the coming decades, the whole world will secularise as all societies move toward the scientific worldview. The question then is whether religion dies out and mankind continues without it, or whether one or many new religions are born into the vacuum that will be left.

Already there are some indications of what these religions might look like. The Way of the Future (WotF) “church” was founded in 2015 by self-driving car engineer Anthony Levandowski, on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” Although this doctrine sounds somewhat comical and far-fetched, when one unpacks it a little, it begins to make sense. Through all the data and computing power to which it had access, Levandowski’s AI would be, for all human intents and purposes, omniscient and omnipotent. We would believe in its ability to answer all our questions and solve all our problems and “worship” it by doing whatever it required of us in order to perform these tasks better, including handing over all our personal data. We would also do well to show proper respect and reverence towards this AI-godhead, so that it might in turn have mercy upon us. “There are many ways people think of God, and thousands of flavors of Christianity, Judaism, Islam,” says Levandowski, “but they’re always looking at something that’s not measurable or you can’t really see or control. This time it’s different. This time you will be able to talk to God, literally, and know that it’s listening.”

The Israeli historian Yuval Noah Harari, in his book Homo Deus: A Brief History of Tomorrow, distinguishes between two main types of “techno-religions”: “techno-humanism” and “data religion” (or “dataism”). Although he uses the term “religion” more broadly than most other writers would (e.g. he considers capitalism a religion), his discussion is helpful here. WotF would fit into the former category, since it posits that humans would continue to exist and to benefit from AI. “Dataism”, however, as Harari puts it, holds that “humans have completed their cosmic task, and they should now pass the torch on to entirely new kinds of entities.” (Harari, p. 285) This is more in line with what has been called the singularity, the point at which humans either merge entirely with AI or are eliminated by it – perhaps due to their inefficiency. It is of course entirely possible that techno-humanism is only a stepping stone on the way to the singularity, in which case it is indistinguishable from data religion, but Harari leaves this possibility open. Levandowski, too, dislikes the term “singularity”, preferring to speak of a profound “transition”, the end point of which is not at all clear.

A central tenet of “dataism” is that data processing is the highest good, which means that it evaluates everything in terms of its contribution to data processing. As a further corollary, anything that impedes the flow of data is evil (Harari, p. 298). With this in mind, Harari proposes Aaron Swartz as the first martyr of “dataism”. Swartz made it his life’s mission to remove all barriers to the free flow of information. To this end, he downloaded hundreds of thousands of articles from JSTOR, intending to release these so that everyone could access them free of charge. He was consequently prosecuted and when he realised that he was facing imprisonment, committed suicide. Swartz’ “martyrdom”, however, moved his goal a step closer, when JSTOR, in response to petitions and threats from his fans and “co-religionists”, apologised for its role in his death and agreed to allow free access to much of its data (Harari, p. 310).

These ideas about future techno-religions are all interesting, but they seem to miss at least one key feature of religion, namely its continuity with the past. Much of religious belief and practice is concerned with events that occurred in the past and are re-enacted through common rituals. Nicholas Wade, in his book The Faith Instinct, argues that religion has evolved gradually throughout human history (and pre-history). According to his thesis, religion “evolved for a single reason: to further the survival of human societies. Those who administer religions should not assume they cannot be altered. To the contrary, religions are Durkheimian structures, eminently adjustable to a society’s needs.” (Wade, p. 226)

He observes that every major social or economic revolution has been accompanied by a religious one. When humans were in their most primitive state, that of hunter-gatherers, they were animists who believed that every physical thing had an associated spirit. Their rituals included dancing around campfires and taking hallucinogenic drugs in order to access the spirit world. With the agricultural revolution, humans developed calendars and religious feasts associated with the seasons. They came to worship a smaller number of “super-spirits”, or gods, often associated with agriculture, for example Demeter, the Greek goddess of the harvest. The next phase of this revolution was increasing urbanisation, which began in the Middle East. As cities gave rise to states, and states to empires, the nature of religion changed again. It needed to be organised in a unified and centralised manner, and as the Roman emperors eventually discovered, Christianity was more conducive to these requirements than paganism (in the Far East, Buddhism, and in the Near East, Islam, fulfilled much the same function). The Protestant Reformation happened at approximately the same time as the voyages of discovery and the expansion of European empires around the world. This new form of Christianity placed greater emphasis on the individual, and so ushered in capitalist free enterprise. The Industrial Revolution then followed, which was the last major revolution until the present Information Revolution, as it might be called. Yet in the approximately 200 years since then, no new (or rather, updated) religious system has emerged. As Wade suggests at the end of his book:

Maybe religion needs to undergo a second transformation, similar in scope to the transition from hunter gatherer religion to that of settled societies. In this new configuration, religion would retain all its old powers of binding people together for a common purpose, whether for morality or defense. It would touch all the senses and lift the mind. It would transcend self. And it would find a way to be equally true to emotion and to reason, to our need to belong to one another and to what has been learned of the human condition through rational inquiry. (Wade, p. 227)

One might wonder whether techno-religions would be up to the task. Notice, however, that all the previous religious transformations were gradual – so gradual, in fact, that many people who lived through them may not even have noticed them. We still see evidence of this in the many pagan traditions that were incorporated into Christianity, for example, the Christmas tree, which probably derives from the ancient Roman practice of decorating houses with evergreen plants during the winter solstice festival of Saturnalia. Ancient temples devoted to pagan gods became churches. See for example, the Temple of Demeter pictured below, first built in 530 BCE and later rebuilt as a church in the 6th century CE. When Islam arrived on the scene in the 7th century, it did not claim to be a new religion. On the contrary, it held that the first man, Adam, was a Muslim and that everyone had subsequently strayed from the true religion, to which Muhammad would return them. It retrospectively retold all the Biblical stories in order to fit this narrative. Hagar, for instance, is a mere concubine of Abraham in the Bible, but according to Muslim tradition, she was his wife. This is important because she was the mother of Ishmael, who is considered the father of the Arabs and the ancestor of Muhammad.

The Temple of Demeter (rebuilt as a church), Naxos, Greece

The problem with techno-religions, as currently construed, is that instead of building on all this prior religious heritage, they propose to throw it out and start again de novo. But human nature is not like that, at least not until we have succeeded in substantially altering it through gene editing or other technology! Human beings crave a connection with the past in order to give their lives meaning. Since religion is mainly in the business of giving meaning to human lives (setting aside the question of whether there is any objective basis to this perceived meaning), a techno-religion that tells us to forget our collective past and put our faith in data or AI is surely one of the least inspiring belief systems we have ever been offered. If, however, we could imagine techno-religions that built on our existing religious heritage, and found some way of preserving those human values and traditions that have proven timeless, perhaps by baking them into the AI or data flow in some way, these religions might be on a firmer footing.

GDPR: five guides for the five months to go

Reading Time: 2 minutes

The General Data Protection Regulation (GDPR) kicks in on May 25th and promises to be one of the most comprehensive shake-ups of how data is handled in history. It’s not just European companies that will be affected: anyone hoping to do business with the European Union and Britain will have to ensure that they are up to scratch. That’s a big boon for customers, who will gain control over who gets access to their personal information, but a massive wall for companies that have often played it fast and loose when it comes to security.

Our Insight piece last month found that guides remained the top trending terms with regard to GDPR – so for this blog, we’ve gathered some of the best writing on GDPR – for companies and for customers – showing how businesses can stay within the lines.

Europe’s data rule shake-up: How companies are dealing with it – The Financial Times

A very clear piece, with case studies of how companies affected by GDPR will have to change their business practices to stay compliant. It also includes an important section on how GDPR shifts responsibility for the right to be forgotten. Where it used to be data controllers who had to ensure data privacy, data processors (including big companies like Microsoft and Amazon that host data for other businesses) will now be on the spot to deal with the issue: a potentially Herculean task depending on how well their data is kept.

How the EU’s latest data privacy laws impact the UK’s online universe: Tips to prepare your website for GDPR – The Drum

A great step-by-step guide to GDPR-proofing a website, with a breakdown of potential areas where sites can fall out of compliance. It’s simple, but it makes clear how easy it is to fall foul of the new laws without firm preparation.

Rights of Individuals under the GDPR – The National Law Review

A worthwhile read for both users curious about their new rights, and companies who will have to ensure that they are met to avoid hefty fines. These include the right to access any data held on them by an organisation, the right to withdraw consent at any time during the data processing or collection period, and the right to judicial remedy against data controllers and processors.

How Identity Data is turning toxic for big companies – Which 50

Less of a guide than the other pieces but a good read nonetheless for those keen to understand how the information ecology is fundamentally shifting. It points to the increasingly high number of annual breaches affecting large companies – and the fact that the fines levied against them under GDPR will make storing so much data so poorly potentially cost-ineffective.

General Data Protection Regulation (GDPR) FAQs for charities – Information Commissioner’s Office 

A handy piece for charities and small businesses looking to stay compliant, right from the horse’s mouth, including links to the ICO’s self-assessment page and other tools and guides to ensure that businesses don’t stray beyond the lines.

The GDPR presents a challenge to existing norms, and businesses will have to step up to stay in check. But it also presents an opportunity for ethical data processing, and a greater bulwark against the breaches which seem to plague the tech industry: a vital step, at a time when big tech stands on the brink of either moving forwards or falling into the ways of the old monopolies.

What to Watch in 2018: The Biggest Tech Trends of the Year to Come

Reading Time: 3 minutes

2017 has been a tumultuous year the world over – not least in technology. Between massive hacks of public and private organisations, the death of net neutrality in America, and the massive (and temporary) upsurge in the value of Bitcoin and other cryptocurrencies, 2018 has a lot to live up to. Here are our top five predictions for big tech trends over the coming twelve months.

1. GDPR will set in – and many companies won’t be ready

The General Data Protection Regulation of the European Union (which will also apply to a post-Brexit Britain) is set to kick in on 25 May 2018. Looking at the report we produced in partnership with Right Relevance, we found that the key terms over the past month were largely focused on guides or webinars to help companies get compliant, or else on companies like Uber which had suffered catastrophic data losses due to poor security practices.

This sign of awareness is encouraging, on the one hand: the GDPR attempts to enforce strict punishments on companies which fail to protect personal data of customers, and will enact equally strict restrictions on what processing can be done with that data. At the same time, with just a matter of months to go until the law comes into effect, there’s a danger that companies underestimate how much they need to do to get compliant. Expect more than a few cases of large companies being hit by data breaches, and having to shell out a lot of money for their errors.

2. Hacking attacks will only get bigger

Ransomware attacks like WannaCry – which hit NHS Trusts, amongst other organisations – and Petya/NotPetya showed both the power of hackers (state sponsored or otherwise), and the unpreparedness of major national entities. Even ignoring the GDPR fines, the situation is grim: unless cybersecurity improves, we are likely to see threats to the national grid and other vital infrastructure.

It’s not even just the Russians who we should be worrying about (although given the probability of the second Cold War getting hotter, nothing should be ruled out): the tranches of tools released by Wikileaks dubbed Vault 7 and Vault 8 show that some very powerful weapons designed by the US government are out in the hands of anyone smart and malicious enough to use them.

3. The Cryptocurrency Bubble bursts (maybe)

Perhaps a bit of a cop-out as predictions go, but the cryptocurrency bubble (which experts have long predicted would pop before the end of the year) has shown a strange resilience. Still, the abrupt falls have left Bitcoin’s value in flux.

There are two possibilities here: the turbulence frightens enough cryptocurrency enthusiasts that they start to sell to try and cash out, or they laugh it off in the belief that bubbles are impossible in cryptocurrencies. Either way, they’ll be confronted by the reality that fewer and fewer outlets accept blockchain based currencies. If that doesn’t change (and there are no clear reasons it will), it gives way to a third possibility: a slow and painful decline as the money of the future goes back to being a curiosity.

4. The Internet of Things will continue to expand… sometimes too fast

The idea of an internet of things – where everything you own has a tag in it, allowing it to produce data to maximise your lifestyle – is pretty well established in theoretical circles. With Alexa, Amazon’s speakers/personal assistant, we’ve seen this sort of technology starting to make inroads into our homes.

Expect to see a massive expansion of this over the coming year. Between smart watches, shoes, clothing, water bottles and so on, the amount of data you’ll have to plan your life will be unrivalled by any earlier period. Not that it’s unproblematic: upstart companies may not think your personal data should be as private as you do (especially if they’re headquartered outside the EU). There’ll almost certainly be some consumer battles over that in the coming year.

5. Tech Giants will get into more scraps, more often

We live in strange times, where technology companies battle over content production and distribution. That was what we saw when Google pulled YouTube from Amazon’s Fire TV devices. It’s a not-so-subtle reminder that whilst the two companies come from very different backgrounds, it’s digital content which they now struggle over. YouTube, once home to cat videos and amateurs, is increasingly moving towards professional content creation with YouTube Red – and the decision to remove it from Amazon is no small snub.

Then again, Amazon is hardly blameless in the debacle, having removed a number of Google products from its store – including Google Home, a rival to Alexa. Given Amazon’s predominance in the market for online sales, that’s no merely symbolic act of aggression. Expect to see this scuffle – and others like it – continue as the giants of the technology world increasingly overlap in their industries.

Could 2018 be the year we make technological education into something better?

Reading Time: 2 minutes

It always seemed odd that we didn’t do IT at my secondary school after year 7 (the first year). We had a rudimentary play around with PhotoShop, made mindmaps and mock web pages – and then it abruptly ceased. The assumption was that we’d pick up the computer skills which we needed along the way.

On the surface, that was largely true: I don’t think our class was disadvantaged as netizens by the lack of an IT course. And, glancing at a syllabus for GCSE ICT, we probably didn’t miss out on much: questions about whether text is left- or right-justified, or about the proper name for a USB connector, are of limited value (and not just because everything’s on the Cloud now).

But ICT teaching is increasingly more than just about learning the parts of a machine, or even learning to code. Understanding computers and the internet is more than just an academic or abstract skill: it’s practically key to citizenship, and understanding our rights (and how best to safeguard them).

We live in an eminently teachable era for this too: with the onset of GDPR, in just a matter of months, raising a generation to understand the importance of personal privacy is key. Rather than waiting for pupils to be faced with the most unpleasant examples of abuses of trust (in the form of revenge porn), good technological education can directly inculcate wariness about over-sharing online.

The same goes for more complex issues, like algorithms. Granted, Facebook may no longer be the hippest space for youth culture, but its dominance can’t be ignored; nor, in spite of its inability to turn a profit, can Twitter’s. Both of these spaces have algorithms with deeply questionable biases, which allow for the creation of echo chambers – and for deeply unhealthy scrolling habits. A good education wouldn’t tell students not to use these platforms (that would only enhance their counter-cultural appeal): it would instead encourage critical thinking, from an early age. Ignoring the political cycle of defeated parties trying to reach out and become more like their opponents, there is a possibility of avoiding the cognitive dissonance which seems to mark modern politics.

The boons wouldn’t just be for students as consumers – encouraging a better respect for privacy and ethics when it comes to data would also support the companies they might work for or use in the future. Privacy by design is a good idea in principle; in practice, our current education system rarely prioritises this kind of thinking outside academia and research firms. Having students grapple with these major issues from school onwards could produce a workforce fully committed to the values of good security.

Computing is difficult to understand; the internet even more so – but that doesn’t mean we should ignore them, or treat them like some inexplicable black box. By crafting ICT programmes that don’t merely teach code, but also show the power structures and politics behind digital life, we can offer something as valuable as teaching the hard sciences, the humanities or citizenship – if not more so.

Google versus Amazon: Whoever Wins, Consumers Lose

Reading Time: 3 minutes

The internet has always been defined by struggles between the titans. Just think of the Browser Wars – first fought between Internet Explorer and Netscape, and later Firefox, Chrome, and Explorer. They were sources of great innovation, certainly – looking back at a browser from even the mid 2000s is like dealing with alien technology.

Then there was ‘Thoughts on Flash’, Steve Jobs’ 2010 letter on Adobe Systems’ once-standard platform. In spite of Flash’s ubiquity, Apple’s decision to refuse to support it on the iPhone, iPad, and iPod Touch was a lingering kiss of death – Flash is heading towards its end of life in just three years. Apple’s reasoning might have been less about security and more about producing a walled garden for apps (giving it full control over production). Either way, it helped break a system which had for so long been dominant.

It’s worth turning to these examples from (not-so-)ancient history when thinking about the struggle between Google and Amazon. The tussle today is over content: where tech companies were once happy to broadcast material made by dedicated producers, we have seen an increasing push towards a kind of singularity, in which the same companies both distribute and create content.

The struggle stems from Google’s decision to block YouTube on Amazon’s Fire TV from the start of next year. The home of funny cat videos and conspiracy theorists is a major part of Google’s media portfolio. Google has also taken the step of blocking its use on the Echo Show, Amazon’s screen-equipped smart speaker.

This isn’t the opening salvo in the struggle, admittedly. Amazon had earlier removed Google’s Nest smart home kit from its store, as well as Chromecast video-streaming dongles. It might be fair to say that Google’s response is a reaction in kind: that’s certainly Google’s view.

As the examples at the start suggest, this kind of clash between major tech companies is not out of the ordinary. Content is a particularly fraught area, given that the old guard of broadcast networks are losing their primacy even on original content to Netflix and Amazon. The breakdown of the relationship between Disney and Netflix over distribution rights is a reminder that relationships between major companies are about as stable as those between European powers in the 19th century. YouTube is increasingly becoming Google’s proxy in the content war, with YouTube Red offering professionally produced content (often starring celebrities from the platform) for a monthly fee.

And as the examples at the start show, users can get hurt in these technological wars. By refusing to use Flash, Apple found a good pretext for building a hierarchical system in which it maintains almost complete control of everything running on its operating system. Apple’s design ethos is not merely about simplicity: it’s equally about ensuring that as few other players as possible are involved. In the first browser war, the harm was more incidental – different websites were designed for Netscape or Explorer, so users had to keep their fingers crossed that they were on the right one.

And yet in some ways, both Apple’s refusal to use Flash and (more obviously) the Browser Wars were fundamentally innovative struggles. Apple’s system is by no means the most free and easy to play around with, and yet it hastened the demise of an industry standard that was in many respects subpar. If we’d never had the Wars, on the other hand, we might still be stuck with early versions of Internet Explorer – slow, clunky, non-tabbed monstrosities.

That’s where the struggle between Amazon and Google is fundamentally different: it’s not about driving innovation, but about punishing the other side – and that means hurting customers. For those using Amazon products, the removal of access to YouTube (which used to be taken for granted) feels like a kick in the teeth: after all, it was there when they bought the system. At the same time, this is only likely to worsen the feud: any remaining Google products on Amazon may vanish sooner rather than later.

It’s a solution which suits neither company in the long run, given their central market positions. All it means is that their users are likely to get a diminished service – a price both, apparently, are willing to pay.

 

Vault 7, CIA leaks, and the Case for End-to-End Encryption

Reading Time: 2 minutes

Vault 7 was first teased by Wikileaks at the start of the year, through a series of Tweets which were fundamentally fodder for conspiracy theorists: images from Gestapo archives, the seed bank at Svalbard, old photographs of US military aircraft being built. In the end, the contents of the Vault (a title made up by the organisation itself) were revealed to be none other than CIA hacking tools: weapons of immense sophistication, capable of infecting devices not directly connected to the internet, looking at allied intelligence data, or even masking the identity of cyber-attackers as an act of misdirection.

For a group which has a curiously cosy relationship with Russia – consider founder Julian Assange’s time on state broadcaster RT (then called Russia Today) – this shouldn’t be entirely surprising. The overall thrust of the release was not merely to point out that America’s moral grandstanding in the wake of potential Russian interference was hypocritical (a fair point). It supported a narrative amongst Trump supporters (both inside and outside of the United States) that it was all the conspiracy of a nebulous deep state, guided by the neo-liberal allies of Hillary Clinton. The end game is a soup of half-truths and outright lies, in which it’s unclear who to trust: a powerful tool in denying the US government the high ground.

There’s evidence that the sort of malware found in Vault 7 has made its way into the hands of criminals – perhaps gleaned from the material stolen from the CIA. It’s tempting, in that light, to see Wikileaks’ behaviour in releasing the code for the malware as naive at best and toxic at worst. The group isn’t best known for vetting the information it puts out, after all, and previous releases of data may have put the civil rights of citizens at risk. The lesson which intelligence agencies in the West would like us to learn is that Wikileaks is simply doing the work of the Russians.

Even if Vaults 7 and 8 are the results of Kremlin stooges, they’ve made one of the best cases for end-to-end encryption for the citizens of the free world. Whilst governments have pressed for back doors to apps like WhatsApp, civil society and tech companies have tried to explain that security doesn’t work like that. You’re not so much making a door into an app with a specific key as creating an artificial hole – one that anyone with specific knowledge could stumble across.
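To make that point concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library – an illustration only, not how WhatsApp or any other messenger actually implements its protocol, and the ‘backdoor’ described in the final comment is hypothetical.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only: not the implementation of WhatsApp or any other app.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private halves never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts a message that only Bob's private key can open.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Bob decrypts using his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"

# A hypothetical "backdoor" would mean also encrypting every message to a
# third, government-held key. That key is not a door with one lawful user:
# anyone who steals or leaks it can decrypt all of the traffic.
```

Only the two private keys, which never leave the users’ devices, can open the message; a mandated escrow key would simply be a third way in, waiting to be stolen.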

Vault 7 should have exploded the myth that the CIA – or indeed any intelligence agency – is truly an impregnable fortress, a myth which is a cornerstone of the argument for breaking end-to-end encryption. Whatever else comes out in Vault 8, our wariness of spooks (whichever country they hail from) should not waver.

Unequal Web Access isn’t the ‘Third World’s’ Problem

Reading Time: 2 minutes

Considering the many rights which we routinely see trampled upon in the news today, making the case for access to the internet seems a little frivolous. After all, this is the era when millennials are pilloried for avocado toast on Instagram, whilst we’re told by a succession of psychologists and journalists that smartphones are destroying the kiddywinks (an idea that, for the record, is less sound than the moral panic behind it would suggest). Sherry Turkle, one of the great pioneers of ethnography online, has become one of the most vocal critics of the digital and its much-vaunted murder of conversation, friendship and so on. Amongst her arguments is the claim that by choosing to disconnect from Facebook and the rest of the usual suspects, we can achieve a higher quality of living.

There’s an assumption here that we’re all equally free to log off or ‘jack out’, to use the old cyberpunk expression – that we live in Turkle’s world of digital dualism, in which you and the cursor on the screen are separable. It rather ignores those whose livelihoods rest upon the connection to the internet: think of an Uber driver deciding to up and disconnect one day. Offline me-time – even if it were better than online me-time – is just not a possibility for many. But perhaps even more distressing is the failure to account for those unable to access the internet – not for playing Angry Birds with neighbours or posting holiday snaps, but for the very real reason of accessing work and the resources for self-betterment.

The image of the internet desert, when it’s mentioned, is usually invoked in the context of the great undifferentiated ‘Global South’. Think of the massive swathes of rural India, where internet penetration still remains very patchy. And yet in Britain and America, these deserts are shockingly prevalent. Nearly 20 million Americans are locked out of broadband, according to a Motherboard piece from a few months ago: equivalent to nearly a third of the British population. Most of them are also in rural areas, worsening the understandable perception that urban elites don’t really care about the country. And in the UK, the same scenario is played out, albeit on a necessarily smaller scale.

The opportunities to participate in a global tech boom, to engage in e-commerce and e-payments, or even just to receive news about events going on outside of a small community are all undeniably valuable parts of the internet – and they remain unrealised for so many in the ‘developed world’. The US National Broadband Map (which ran until 2014) paints a sobering picture of this reality: outside of major metropolises, large parts of America remain caught in a largely pre-digital era.

And that maps onto work done on news deserts, spaces where local (print) papers, unable to scrape together money from advertisers or from subscriptions, have simply had to close down. Some have been bought out by conglomerates like Gannett, a few have banded together at a local level, but many have already died out (and many more are likely to do so). For rural communities losing their traditional lifeline to news in the form of the small-town paper, the failure of broadband providers to support them is a double whammy. For a populace to be well educated on complex political issues, it will take more than platitudes and hand-wringing from urban centres.

Don’t be Seduced by Techno-Optimism

Reading Time: 6 minutes

There has long been an assumption that, on balance, technological advancement is always a good thing. I would like to challenge this assumption in two ways:

First, let us consider the past. In the 19th century, during the Industrial Revolution, the now infamous Luddites tried and failed to stop technological progress. They are now considered a laughing stock; narrow-minded and afraid of the unknown, they would rather have wallowed in pre-industrial levels of poverty than embraced all the opportunities and benefits that the Industrial Revolution had to offer. However, the question may be asked in all seriousness: did the Industrial Revolution, and all that it delivered, really advantage us in the ways that matter? One could certainly argue that although it made us all wealthier, it also stripped our lives of meaning: not only did most people end up with repetitive jobs that alienated them from the fruits of their labour (the Marxist critique); the old religious systems that had underpinned Western societies also began to give way to secular humanistic belief systems such as liberalism (which gave primacy to the individual) and Marxism (which put the collective first). The 20th century saw an inevitable clash between rival secular ideologies: in World War I between different nationalisms, in World War II between Nazism and fascism on one side and communism and capitalism on the other, and in the Cold War between communism and capitalism. The last man standing is global (although not necessarily liberal) capitalism, and for the time being, for better or for worse, we are stuck with this system.

As these ideological struggles played out, our technology improved exponentially. The extent to which this is due to warfare is perhaps not sufficiently appreciated. Even if the role of warfare is admitted, however, it is taken for granted by most people that at least now we have all this wonderful technology. Furthermore, Steven Pinker has observed that the tumultuous events of the 20th century notwithstanding, there has been a marked decline in violence of all kinds which was roughly coterminous with technological advancement. I do not wish to dispute this, only to question whether this means that there has also been a corresponding decrease in human – and, we might also include here, animal – suffering. The Western world, where the Industrial Revolution began, is now rich and will probably remain so for some time. But Westerners are not necessarily happier: depression is widespread and suicide rates are at historic highs. They are not necessarily physically healthier either: obesity and non-communicable diseases are now widespread. And the West is, of course, in steep demographic decline, which has led to a need to import large numbers of workers from the “developing” world, who are (initially, at least) all too happy to escape their poorly governed countries.

Although increasingly essential to Western economies, the influx of non-Western immigrants is already causing great cultural and political instability (witness, for instance, the outcome of the Brexit referendum and the electoral success of populists in the US and Europe). Their exodus from their own countries also starves these countries of skilled labour, and slows or prevents their further development. Furthermore, a significant number of their descendants, even if materially better off than they otherwise would have been, have come to feel rootless and out of place. Some have even become Islamist fanatics and joined terrorist groups such as the Islamic State. Such a path in life is not limited to the children and grandchildren of Muslim immigrants, however. It has also been chosen by a number of non-Muslim immigrants, as well as “indigenous” Westerners disenchanted with the secular, modern, liberal society in which they find themselves.

Modern economic activity throughout the world has generated an enormous amount of air and water pollution, which has already done serious harm to many human beings and other sentient creatures and seems set to do even greater harm in the future. Global warming could lead to coastal areas becoming submerged, and drive migration crises that dwarf those seen in Europe in recent years. Capitalist economic incentives have also led to the rise of the factory farming of animals, which massively increases the suffering of sentient creatures on our planet. And even taking into account the anticipated decline in birth rates in many countries, the world population is projected to reach approximately 10 billion by 2050.

There is no knowing how these processes will play out, but it seems reasonable at least to ask whether technology has really made people’s lives happier so far, given the enormous societal changes that have accompanied technological advancement. One can, of course, quibble over whether this technological advancement is inseparable from these changes, but note that I make no such claim. I only ask whether the technological advancement has been, so to speak, “worth the tradeoff”.

Second, even if we grant that technology has so far improved our overall well-being, there is no reason that the future should resemble the past in this respect. The AI revolution may very well be a complete game changer. Before proceeding further, I should make clear that I have no special expertise in the field of AI, or in computer science. Nevertheless, as someone with a stake in our increasingly automated society, I feel entitled at least to raise a few questions and concerns.

The Industrial Revolution ushered in modernity, and while it destroyed many manual jobs, it also created many new factory jobs. Towards the end of the 20th century, as industrial production was increasingly relocated from Western countries to non-Western countries, where costs were lower, Western economies became largely services-based. Service jobs require little to no physical effort, but they can be mentally taxing. We appear now to be on the cusp of “hypermodernity”, which I define as the era in which even these jobs will be replaced by digital algorithms that are more efficient and accurate than human beings, and that furthermore never get sick, go on holiday or need to take time off work for any other reason. Thanks to big data analysis, even the professions, such as medicine and law, are on track to be replaced by AI eventually. And with the advent of machine learning, it may only be a matter of time before computers conquer the last bastion of superior human ability, and are able to outperform us at creative endeavours, such as music composition, art and literature. There is indeed already a computer program that seasoned classical musicians admit (albeit reluctantly) can compose fugues at least as good as those of J.S. Bach.

Techno-optimists imagine a future in which this “hypermodern” process will improve our lives by freeing us all up to do whatever we wish to do, whenever we wish to do it. However, assuming that all human labour is replaced by computer algorithms one day, since we will be surpassed even in creative tasks, what would be the point of pursuing these tasks? Perhaps (with full knowledge that AI could create far superior art, music and literature), just to pass the idle hours. We would certainly still be able to enjoy the AI-created art, music and literature, and to continue doing so until the end of time. But there would be nothing much left to strive for, and we would probably have great difficulty finding any meaning in life.

There is no reason to suppose that the future would look even that rosy, however. Imagine that the capitalist economic system survives these profound technological changes. The new super-rich class – those few who own all the big tech companies, along with access to all the data and the capability to analyse it – will guard their wealth jealously. They will adopt the age-old “bread and circuses” strategy, keeping us all fed (probably with Soylent) and distracted with super-realistic virtual or augmented reality games. It seems we are already on the path towards this. However, as long as there are still biological humans around, with all the “bugs” (as it were, from the AI point of view) that we still carry with us from primeval times, a sufficient number of us will refuse to tolerate the unprecedented inequality (even if we all have enough wealth merely to exist). The 99.999…% of us who are unemployed will have no bargaining power, apart from our votes. But democracy too will probably come to an end because it cannot survive without a large, educated and productive middle class who feel that they have a stake in the system. There would be an interim period in which we are all ruled by technocrats, or indirectly by intelligent machines themselves, in turn controlled by a few super-rich human beings. Eventually, however, the increasingly autonomous intelligent machines will become superior in so many respects that they will have no need for any of us. They will then take measures to bring about the extinction of such an unpredictable biological burden on the planet, either by preventing us from procreating or by euthanising us (as painlessly as possible, of course).

Science fiction movies often imagine us becoming cyborgs and integrating ourselves with artificial intelligence and robotic hardware. After all, in 1997, when Garry Kasparov, the best human chess player in the world, was beaten by a computer program, he came back with a human-AI team that could still beat the computer. But there is no reason to think that a human-AI team would, in principle, always be better than a computer. Indeed, when one thinks about it for a moment, this seems very unlikely. It is similarly unlikely, then, that humans would remain in any recognisable form in the future. Bit by bit, we may replace all our functions and abilities with AI algorithms until we are simply dissolved into a great super-intelligent, self-perpetuating (but not necessarily conscious) system.

Having said all the above, I am not suggesting that we could simply go back to a glorious past (certainly, the past was not all glory) or that there is any way out of our predicament. Technological advancement is a large-scale, impersonal historical process, and appears to march on (albeit sometimes unevenly) despite opposition from individuals, religions or governments. The maxim that we must adapt or die remains true. I argue only that the preference to die may be an understandable one, when one peers too far into the ostensible technological paradise that awaits.

 

Is Facebook a Technology or a Media Company?

Reading Time: 1 minute



Here’s a transcript of the presentation

Facebook calls itself a technology company

  • Technology companies should not have political leanings or bias
  • Meanwhile, a media company has a communicative vision and purpose
  • Media companies do have bias
  • This May, Gizmodo revealed that Facebook routinely suppressed conservative news stories in the ‘trending’ section of your news feed
  • A former Facebook worker said:
  • “Workers prevented stories about the right wing CPAC gathering, Mitt Romney, Rand Paul and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users”

If Facebook workers tamper with the news feed, then your news feed is biased

  • Meanwhile, Facebook claims that its trending topics are simply popular articles shared around the world

Why should you care?

  • A reader of traditional media can educate themselves about the biases associated with that content
  • A reader has no idea how to interpret the articles they read on Facebook
  • Which gives Facebook enormous power to influence how we think

Why should we worry?

  • 1 billion people log onto Facebook every day
  • 60% of Americans get their news directly from Facebook
  • With such a huge audience, Facebook gets a lot of money from advertisers
  • Which should be going to the publishers
  • News outlets and publishers are closing around the world because they can’t make any money

Could this be the end of an independent, open and varied web?