Cryptocurrencies and the True Source of Value

Reading Time: 5 minutes

One of the arguments against Bitcoin and cryptocurrencies in general is that they do not represent true value. Behind the crypto-algorithms, according to this line of argument, is really nothing that could objectively be considered currency; indeed, nothing at all. Hence, cryptocurrencies are a bubble which is bound to burst. This is not just a man-on-the-street opinion; it has been espoused by the billionaire investor Howard Marks, who famously warned of the “dotcom bubble” of the late 1990s. “In my view, digital currencies are nothing but an unfounded fad (or perhaps even a pyramid scheme), based on a willingness to ascribe value to something that has little or none beyond what people will pay for it,” Marks said in 2017. Marks used historical precedent to underscore this point, pointing to the notorious “tulip mania” that gripped the Netherlands in the 17th century. In 1637, at the height of the mania, a single tulip bulb could be worth up to ten times the annual income of a skilled craftsman.

Lydian coin. Inscription reads “I am the sign of Phanes”. Electrum (alloy of gold and silver), length: 2.3 cm. Late 7th century BCE, found at Ephesus. Israel Museum, Jerusalem.

One might wish to consider other historical precedents, however. Currencies per se are a surprisingly recent invention in human history. According to the archaeological record, the first coins were used in Lydia (present-day Turkey) in the 7th century BCE (see image above). This is long after the rise of cities and kingdoms, and indeed after the successful smelting and working of metals, including gold, silver and bronze; it is even some 500 years after the commencement of the Iron Age in the Middle East. We also know that this was not due to a lack of technical engraving ability, since many small metal seals with intricate designs have been found dating from centuries before the 7th-century Lydian coins (see image below).

Seal of Tarkummuwa, King of Mera. Silver (diameter: 4.2 cm). c. 1400 BCE, found at Smyrna. Walters Art Gallery, Baltimore.

The anthropologist David Graeber has provided an interesting explanation of why coinage was eventually developed. Coins were not initially used by most ordinary people, he argues. The available archaeological evidence shows that the first coins were used by soldiers. This makes sense, Graeber argues, when we consider that ancient rulers had to find a reliable way of feeding armies at the frontiers of their empires. If the soldiers were stationed far inland, he points out, it would be extremely difficult to move large amounts of grain or other foodstuffs with them. If, however, standardised coins could be minted and given to soldiers, the soldiers would be able to buy the necessary food from the ruler’s civilian subjects in these far-flung parts of the empire. By taxing his subjects, the ruler would then see these metallic tokens of value return to him. Coins thus began as a more efficient way of feeding armies, but once they had acquired universally recognised value within the state, they could be applied to any economic transaction.

In order to be hard to forge, coins had to be minted out of rare metals by skilled craftsmen. But even gold, silver and copper, which were used for the earliest coins, have no intrinsic value, as the Israeli historian Yuval Noah Harari points out – “you can’t eat it, or fashion tools or weapons out of it.” The lesson here is that no form of currency has value above and beyond what we ascribe to it, collectively, as human beings. Thus, it will not do to dismiss a cryptocurrency, as Marks does, because it has no value beyond what people will pay for it (this is not to say, of course, that other arguments against cryptocurrencies fail; only that this particular line of argument is unconvincing). One might well imagine an ancient Lydian exclaiming, “These bits of metal with their fancy designs and inscriptions have no real value. The whole fraud will surely collapse after the king dies.” And yet, as we now know, it did not turn out that way. The coins had value because enough people came to believe that they did, and that was all that mattered.

We have since, although only relatively recently, abandoned the gold standard: in 1971 the US ended the dollar’s convertibility into gold, leaving the world’s reserve currency a purely fiat one. One could even argue, as some have, that the dollar is a less reliable store of value than either gold or Bitcoin, because the US Federal Reserve can simply print as many units as it sees fit – and indeed, in the rounds of quantitative easing that followed the 2008 crash, it has been creating new dollars in unprecedented quantities. The amount of gold in the world runs up against physical limitations, whereas the amount of Bitcoin runs up against mathematical ones. While it is true that other cryptocurrencies can avoid the particular limitations built into Bitcoin, matters such as the total number of units to be issued and the value of each unit relative to everything else still depend on the vital criterion of consensus among the community of users. Notice that, technological aspects aside, this criterion also applied to the very first currencies used by our species. While the first coins were issued by rulers in a top-down fashion, these rulers did not realise that they had brought into being a monetary system that would soon escape their control. As Graeber also notes, after appearing in Lydia, coinage soon emerged independently in different parts of the world. This meant that when different empires came into contact with each other, they had to arrive at a fair exchange rate. If the empires were of roughly equal power, this rate could not be dictated by either ruler and was determined instead by market factors beyond any one individual’s control. Exchange rates between different official currencies have thus continued to fluctuate from ancient until modern times.
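The claim that Bitcoin runs up against mathematical rather than physical limits can be made concrete with a back-of-envelope calculation. The Python sketch below is purely illustrative – it is not taken from any actual Bitcoin implementation – and assumes only the publicly documented issuance parameters: an initial block reward of 50 BTC, halved every 210,000 blocks, with amounts counted in whole satoshis (1 BTC = 100,000,000 satoshis).

```python
# Illustrative estimate of Bitcoin's hard supply cap from its halving schedule.
# Assumes the published protocol parameters; this is a sketch, not real Bitcoin code.

SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000              # blocks between reward halvings

total_satoshis = 0
reward = 50 * SATOSHIS_PER_BTC          # initial block reward, in satoshis

while reward > 0:
    total_satoshis += reward * HALVING_INTERVAL
    reward //= 2                        # integer halving eventually reaches zero

print(f"Maximum supply: {total_satoshis / SATOSHIS_PER_BTC:,.4f} BTC")
# Maximum supply: 20,999,999.9769 BTC -- just under the famous 21 million
```

Each halving era adds half as many new coins as the one before, so the series converges just below 21 million; no central authority can simply decide to issue more, which is the precise sense in which the limit is mathematical rather than physical.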

Bitcoin and other cryptocurrencies could indeed be seen as the next logical step: prior to their emergence, the only “non-physical” medium of exchange resembling a truly global currency was the IMF’s “Special Drawing Rights” or SDRs, although as their name suggests these have only been issued and used in exceptional circumstances. Better yet, unlike SDRs, cryptocurrencies are not controlled centrally in any way. Instead, they are designed to bypass both governments and banks. All they require is a public ledger, the blockchain, to keep track of all transactional information. Governments and banks understandably find this frustrating and will likely do all they can to bring cryptocurrencies under their control. In this respect, however, they may resemble a Lydian king who tries to fix the prices of various commodities, only to find his attempts frustrated by his subjects, who find roundabout ways to buy or sell commodities at market prices.
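To make the idea of a public ledger that “keeps track of all transactional information” more tangible, here is a deliberately minimal, hypothetical sketch in Python of a hash-chained ledger. It omits everything that makes real cryptocurrencies work in practice – proof of work, digital signatures, peer-to-peer consensus – and shows only the core property: because each block commits to the hash of the previous one, silently rewriting past transactions breaks every subsequent link.

```python
# Toy hash-chained ledger: a sketch of the "public ledger" idea only,
# not a model of any real cryptocurrency's implementation.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical (sorted-key) JSON encoding of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Recompute every link; any tampering with history shows up as a mismatch.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                          # True

ledger[0]["transactions"][0]["amount"] = 500   # quietly rewrite history
print(verify(ledger))                          # False: the chain no longer checks out
```

In a real system, many independent parties hold copies of such a ledger and agree on which chain is valid, which is what allows cryptocurrencies to bypass both governments and banks.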

The fact of the matter is that we are now all living in a global economy, and cryptocurrencies have beaten the IMF to the finish line of establishing imaginary units of value that are created (or “mined”), recognised and used globally. One or even all of them may collapse eventually, but the point is that such an event cannot be brought about by governments or banks. The technology is now out there, as is the will to avoid the fiats of governments or banks. And if they do collapse irreversibly, that is not necessarily good news for fiat currencies. The need for an independent global currency will likely persist even in their absence, perhaps leading to a return to something like the gold standard. In any event, when we go back to the very root of currencies and what makes them valuable, we may well discover a counter-intuitive (at least, to some) truth: that both gold and cryptocurrencies are better placed as stores of value than fiat currencies, such as the pound, dollar or euro.

There is No Solution to the Problem of “Fake News”

Reading Time: 4 minutes

In the aftermath of the 2016 election, the term “fake news”, seldom heard previously, became ubiquitous. This was, of course, no coincidence: the unexpected victory of Donald Trump cried out for an explanation, and invoking the concept was one such attempt by the president’s many critics, who could not bring themselves to face the possibility that he won fairly. As one conservative commentator saw it, “just as progressive ideas were being rejected by voters across the western world, the media suddenly discovered a glitch which explained why. Fake news is the new false consciousness.” But the dissemination of disinformation and propaganda is as old as civilization itself. The internet is merely a new means of spreading them, and not an especially new one at that. Consider, for instance, the anti-vaccination and “9/11 truth” movements of the preceding decades, and the role played by the internet in amplifying the voices of otherwise small groups of dedicated ideologues or charlatans. So we are still left wondering: why has the term “fake news” entered public discourse only in the last few years?

A possible answer is that the point has been reached at which traditional purveyors of news feel that they no longer have control over broader narratives. Their sounding of the alarm over “fake news” is thus a desperate rallying cry in order to regain this control. Some have drawn an analogy to the invention of the printing press in the 15th century, which also revolutionized the spread of information and led to the Protestant Reformation (and, of course, to disinformation, such as exaggerated accounts of the horrors of the Spanish Inquisition). From this perspective, it is futile to resist the changing ways in which information spreads. One must adapt or die. In many ways, Donald Trump, who began his presidency fighting off a cascade of “fake news” allegations, including about such petty matters as the size of his inauguration crowd, has done a better job of adapting to the new informational ecosystem. Twitter, with its 280-character limit (until recently, only 140), has turned out to be the perfect medium for a president with a reportedly short attention span. He also uses it to bypass the mainstream media in order to reach the public directly with his own message or narrative. And the president has masterfully turned the weapon of “fake news” around, aiming it right back at the media. At the end of 2017, his first year in office, he seemed to relish releasing “The Highly Anticipated Fake News Awards”, a list of misleading or false anti-Trump news stories intended to undermine the media’s insistence that it is impartial.

For all its faults, however, the mainstream media does have a legitimate point about the dangers of “fake news”. There must be an objective standard against which all purveyors of news are held, and there does need to be a common set – or at least a core – of facts upon which all rational parties in society can agree. But this is easier said than done, and it is far from obvious that there is a “quick fix” solution to this problem that does not merely favor one set of news purveyors over another, based on criteria other than factual accuracy. For example, many in the US fear that the Federal Communications Commission’s (FCC) proposed changes to “net neutrality” rules will give a few major companies the ability to speed up, slow down or even block access to certain web addresses or content. Comcast, for instance, is simultaneously the largest television broadcasting company, through its National Broadcasting Company (NBC) channel, and the largest internet service provider in the United States. Should the current FCC chairman’s plans to end “net neutrality” succeed, Comcast would be in a powerful position to effectively regulate much of the online media landscape according to its own financial interests as a news organisation.

Social media companies such as Facebook have come under fire for spreading “fake news”. Although Mark Zuckerberg initially argued that Facebook is a tech platform and not a media company per se, he was eventually forced to concede that, whatever he had originally intended the company to be, an increasing number of people around the world did in fact get their news primarily from their Facebook newsfeed, and that Facebook therefore had “a responsibility to create an informed community and help build common understanding”. Behind this corporate newspeak must also lie a very real fear that government regulation of Facebook as a media company could end up crippling its business model. If Facebook could be held liable for the spread of false information, it would need to hire thousands of fact checkers to nip it in the bud whenever it occurs, but doing so would be far too costly for the organisation, to say nothing of the practical challenges involved. Thus, it has had to rely on very imperfect “fake news” detection algorithms and, more recently, on a deliberate de-emphasis of news altogether, the idea being to return the platform to its original purpose of connecting friends and family.

But it is gradually dawning on many people that the war on “fake news” may be unwinnable. This is because there is no in-principle solution to the age-old philosophical problem of how to know what is true. If anything, this problem has become vastly more difficult now that there is an abundance of information to sort through, presented to us in a non-random–but not necessarily truth-tracking–way. We would all do well, however, to exercise greater skepticism in response to all truth claims, including official ones, such as the vague claim that Russia “hacked the election”. Skepticism does not come naturally to human beings, who are notoriously credulous. People should thus be taught from a young age to be skeptical, and to favor logical consistency and empirical evidence over other considerations when evaluating competing truth claims. This approach falls well short of a real solution, but it may help us, individually and collectively, to navigate the treacherous ocean of information in which we find ourselves. Hopefully, we will find ways of adjusting to our current information environment, and a new equilibrium will emerge from the informational chaos. Cronycle is one platform that is ahead of the curve in this respect: it not only recognizes the problem of information overload, but provides its users with useful tools for finding the trustworthy, high-quality content out there in the Wild, Wild Web.

Does the Future of Religion Lie in Technology?

Reading Time: 7 minutes

Very little these days seems untouched by technology. Indeed, people’s lives are so saturated with it that they sometimes speak of “withdrawal symptoms” on those increasingly rare occasions when they find themselves without internet access. Some try to escape it at least some of the time, for instance by “disconnecting” for a day or more while on holiday or on a retreat. Yet surely, one might think, religion is one area that remains largely untouched by technology. This is certainly true of the Amish or ultra-Orthodox Jews, who are outright suspicious of new technology. But it is largely true of mainstream religion as well. The eternal “flame” in synagogues is now often electric, churches have replaced the candles on their Christmas trees with electric lights, and the muezzin’s call to prayer is often amplified by a loudspeaker, but these changes are trivial. Ancient religions have shown themselves able to incorporate technology into their practices without disappearing or changing beyond recognition. It seems, then, that technology does not directly threaten religion.

Nevertheless, throughout most of the Western world, the churches are empty. Declining church attendance certainly seems to be correlated with technological advancement, but is there a causal connection? Perhaps the further factor causing both is the triumph of the scientific worldview. This laid the ground for the discoveries of biological evolution and Biblical criticism–which pulled the rug from under religion–as well as for rapid technological advancement. What makes Western societies different from non-Western ones is that the former experienced both of these processes simultaneously; the latter, by and large, experienced only the technological one. It is possible, however, that in the coming decades the whole world will secularise as all societies move toward the scientific worldview. The question then is whether religion dies out and mankind continues without it, or whether one or many new religions are born into the vacuum that will be left.

Already there are some indications of what these religions might look like. The Way of the Future (WotF) “church” was founded in 2015 by the self-driving car engineer Anthony Levandowski, and is dedicated to “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” Although this doctrine sounds somewhat comical and far-fetched, when one unpacks it a little, it begins to make sense. Through all the data and computing power to which it had access, Levandowski’s AI would be, for all human intents and purposes, omniscient and omnipotent. We would believe in its ability to answer all our questions and solve all our problems, and would “worship” it by doing whatever it required of us in order to perform these tasks better, including handing over all our personal data. We would also do well to show proper respect and reverence towards this AI-godhead, so that it might in turn have mercy upon us. “There are many ways people think of God, and thousands of flavors of Christianity, Judaism, Islam,” says Levandowski, “but they’re always looking at something that’s not measurable or you can’t really see or control. This time it’s different. This time you will be able to talk to God, literally, and know that it’s listening.”

The Israeli historian Yuval Noah Harari, in his book Homo Deus: A Brief History of Tomorrow, distinguishes between two main types of “techno-religions”: “techno-humanism” and “data religion” (or “dataism”). Although he uses the term “religion” more broadly than most other writers would (e.g. he considers capitalism a religion), his discussion is helpful here. WotF would fit into the former category, since it posits that humans would continue to exist and to benefit from AI. “Dataism”, however, as Harari puts it, holds that “humans have completed their cosmic task, and they should now pass the torch on to entirely new kinds of entities.” (Harari, p. 285) This is more in line with what has been called the singularity, the point at which humans either merge entirely with AI or are eliminated by it – perhaps due to their inefficiency. It is of course entirely possible that techno-humanism is only a stepping stone on the way to the singularity, in which case it is indistinguishable from data religion, but Harari leaves this possibility open. Levandowski, too, dislikes the term “singularity”, preferring to speak of a profound “transition”, the end point of which is not at all clear.

A central tenet of “dataism” is that data processing is the highest good, which means that it evaluates everything in terms of its contribution to data processing. As a corollary, anything that impedes the flow of data is evil (Harari, p. 298). With this in mind, Harari proposes Aaron Swartz as the first martyr of “dataism”. Swartz made it his life’s mission to remove all barriers to the free flow of information. To this end, he downloaded hundreds of thousands of articles from JSTOR, intending to release them so that everyone could access them free of charge. He was consequently prosecuted and, when he realised that he was facing imprisonment, committed suicide. Swartz’s “martyrdom”, however, moved his goal a step closer: in response to petitions and threats from his fans and “co-religionists”, JSTOR apologised for its role in his death and agreed to allow free access to much of its data (Harari, p. 310).

These ideas about future techno-religions are all interesting, but they seem to miss at least one key feature of religion, namely its continuity with the past. Much of religious belief and practice is concerned with events that occurred in the past and are re-enacted through common rituals. Nicholas Wade, in his book The Faith Instinct, argues that religion has evolved gradually throughout human history (and pre-history). According to his thesis, religion “evolved for a single reason: to further the survival of human societies. Those who administer religions should not assume they cannot be altered. To the contrary, religions are Durkheimian structures, eminently adjustable to a society’s needs.” (Wade, p. 226)

He observes that every major social or economic revolution has been accompanied by a religious one. When humans were in their most primitive state, that of hunter-gatherers, they were animists who believed that every physical thing had an associated spirit. Their rituals included dancing around campfires and taking hallucinogenic drugs in order to access the spirit world. With the agricultural revolution, humans developed calendars and religious feasts associated with the seasons. They came to worship a smaller number of “super-spirits”, or gods, often associated with agriculture, for example Demeter, the Greek goddess of the harvest. The next phase of this revolution was increasing urbanisation, which began in the Middle East. As cities gave rise to states, and states to empires, the nature of religion changed again. It needed to be organised in a unified and centralised manner, and as the Roman emperors eventually discovered, Christianity was more conducive to these requirements than paganism (in the Far East, Buddhism, and in the Near East, Islam, fulfilled much the same function). The Protestant Reformation happened at approximately the same time as the voyages of discovery and the expansion of European empires around the world. This new form of Christianity placed greater emphasis on the individual, and so ushered in capitalist free enterprise. The Industrial Revolution then followed, which was the last major revolution until the present Information Revolution, as it might be called. Yet in the approximately 200 years since then, no new (or rather, updated) religious system has emerged. As Wade suggests at the end of his book:

Maybe religion needs to undergo a second transformation, similar in scope to the transition from hunter gatherer religion to that of settled societies. In this new configuration, religion would retain all its old powers of binding people together for a common purpose, whether for morality or defense. It would touch all the senses and lift the mind. It would transcend self. And it would find a way to be equally true to emotion and to reason, to our need to belong to one another and to what has been learned of the human condition through rational inquiry. (Wade, p. 227)

One might wonder whether techno-religions would be up to the task. Notice, however, that all the previous religious transformations were gradual – so gradual, in fact, that many people who lived through them may not even have noticed them. We still see evidence of this in the many pagan traditions that were incorporated into Christianity, for example the Christmas tree, which probably derives from the ancient Roman practice of decorating houses with evergreen plants during the winter solstice festival of Saturnalia. Ancient temples devoted to pagan gods became churches: see, for example, the Temple of Demeter pictured below, first built in 530 BCE and later rebuilt as a church in the 6th century CE. When Islam arrived on the scene in the 7th century, it did not claim to be a new religion. On the contrary, it held that the first man, Adam, was a Muslim and that everyone had subsequently strayed from the true religion, to which Muhammad would return them. It retrospectively retold the Biblical stories in order to fit this narrative. Hagar, for instance, is a mere concubine of Abraham in the Bible, but according to Muslim tradition she was his wife. This is important because she was the mother of Ishmael, who is considered the father of the Arabs and the ancestor of Muhammad.

The Temple of Demeter (rebuilt as a church), Naxos, Greece

The problem with techno-religions, as currently construed, is that instead of building on all this prior religious heritage, they propose to throw it out and start de novo. But human nature is not like that, at least not until we have succeeded in substantially altering it through gene editing or other technology! Human beings crave a connection with the past in order to give their lives meaning. Since religion is mainly in the business of giving meaning to human lives (setting aside the question of whether there is any objective basis to this perceived meaning), a techno-religion that tells us to forget our collective past and put our faith in data or AI is surely one of the least inspiring belief systems we have ever been offered. If, however, we could imagine techno-religions that built on our existing religious heritage, and found some way of preserving those human values and traditions that have proven timeless, perhaps by baking them into the AI or data flow in some way, such religions might be on a firmer footing.