How Will Mobile Apps Make Business Easier

Reading Time: 4 minutes

The field of mobile apps is growing as we speak, mostly in two different ways. One is the app development market, which employs millions of people globally. The other is the use of mobile apps for a wide range of business purposes. As new technologies are developed, they also find their way into mobile implementations. So, let’s look at the further potential of mobile apps in a world of AI, AR and other tech innovations.

 

1) Artificial intelligence and mobile apps

Smartphones, tablets, and wearables already run smart software tools that can learn some of the patterns established by their users. However, this is only the tip of the iceberg: AI is set to change the mobile app market.

For starters, our living and work habits will be gathered and memorized by AI. In turn, this will enable app creators and mobile providers to prepare ready-made offers for our daily routines. Since many of our decisions and actions will be anticipated, these innovations should lead to a more productive workday and better-organized free time.

Apart from that, app developers will have a chance to use this AI-collected data for QA testing. As a result, they’ll save time that can be invested in solving complex UX and functionality problems. AI features will handle the tedious coding tasks and analyze the UX input gathered from customers.

Nevertheless, all these innovations could harm our privacy if private data isn’t collected and stored in accordance with legal guidelines. That’s why every app developer will need to take the GDPR into consideration. Still, if you follow these rules, you’ll benefit from these tech innovations, including the AI features.

 

2) Augmented reality in business apps

The growth of the eCommerce industry has taken the global retail market by storm. Most renowned vendors already use apps, in addition to their business websites, to make their products available for shopping on the go.

Things are moving even faster today, especially with the introduction of augmented reality in eCommerce.

The greatest benefit of AR in this context is that it makes shopping simpler and less expensive for customers. For instance, Amazon has introduced AR features in its app: you can project the item you’d like to buy into the space where you’d like to place it.

Similarly, car dealers are taking the plunge into AR in their everyday work. Car buyers no longer have to trek around countless showrooms; they can simply use AR features in dealers’ mobile apps to try out new vehicles. Read more about these AR trends in the article on The Drum website.

The downside of AR is that it’s still expensive for many SMBs. However, this will change sooner than we might expect, which will enhance the productivity of smaller business enterprises.

 

3) Accounting benefits of mobile apps

Small business owners often struggle with accounting demands. From their in-house books to bank accounts to tax returns, more often than not they fail to process some data. These mistakes can result in inaccurate accounting records and financial penalties from the tax authorities.

The good news is that there are literally thousands of accounting mobile apps that will make your business life easier.

Still, this large number of apps calls for caution. Naturally, the best way to avoid risk in this field is to use mainstream tools, such as QuickBooks or FreshBooks. Both have top-notch apps as well as strong cloud features, which makes them well suited to new business owners.

Apart from that, you can use modern accounting tools on your phone to simplify the payment procedure. In line with that, it’s wise to keep an online invoice maker at your fingertips. Every time you need to handle larger orders or payments, you can issue an invoice in no time and speed up the purchase.

 

4) Increased productivity with mobile apps

Mobile apps have already improved our work productivity. Take the accounting apps described in the previous section: used on a phone on the go, they let you deal with business paperwork while commuting home from work or waiting in line at the supermarket.

Moreover, mobile apps enable SMB owners and their workers to communicate constantly about their projects. What’s more, many project management tools come with mobile apps as well, so you have all-in-one solutions for work organization, time management and data sharing. Now imagine how advanced all these tools will become when AI, AR and other cutting-edge features are fully implemented in them.

Also, using mobile apps in business ventures enables owners and employees to collaborate remotely. This opens up an immense number of possibilities for employment, cooperation and connectivity, and in the future these features will lead to further improvements in working conditions and efficiency.

 

Conclusion

The number of mobile users is already counted in billions, and advancements in smart devices and apps will push these figures higher still. Mobile apps are improving just as quickly as the number of mobile users is growing. Together, these two trends will produce a more engaging and inspiring work environment, yielding benefits for business owners, their employees and, finally, the users of their services. That’s why we should all be looking forward to the app-enhanced business future.

 

This blog post was written by our guest Mark, a biz-dev hero at Invoicebus, which you can also follow on Twitter.

How AI Can Improve Customer Engagement

Reading Time: 3 minutes

Success in business can be measured in happy customers. Aside from your product or service offerings, showing consumers you appreciate them, hear them, and are interested in providing them with the best customer service possible will not only help you retain current customers but also gain new ones. This, in short, is called good “customer engagement”. Various methods and tools have become available in recent years to optimize customer engagement, and the most recent game changer has come in the form of artificial intelligence. Because AI can provide tremendous benefits with minimal effort, businesses of all sizes and in various fields are taking full advantage of what it has to offer. In fact, according to a Pegasystems survey on customer engagement, one hundred percent of top-performing companies are currently using AI in some fashion. For those interested in following that lead, we’re outlining ways in which AI can improve customer engagement.

Though the general public may not be entirely aware of it, use of AI is already incredibly common. While only thirty-four percent of people think they use AI, the reality is that eighty-four percent of us are interacting with some form of artificial intelligence on a daily basis. This statistic is good news when it comes to placating consumers who fear or negatively view the use of AI in business. Considering the technology can hugely benefit a consumer’s experience, education is key. From better product offers to faster response times and more relevant messaging, AI’s power to anticipate and meet customer needs is a win for us all.

Properly engaging with your customers begins with understanding them. Knowing the answers to a few simple questions (who they are, what they want, what they can afford, what their pain points are, and which platforms they communicate on) gives you a running start toward improved connections. Through machine learning and AI, companies can collect and analyse enormous amounts of data that answer these questions, creating a better customer experience with each interaction.

An increasingly popular tool for bettering customer engagement is the chatbot. These virtual assistants go hand-in-hand with customer service as more and more companies recognize their value and begin implementing them.

Rapid Response

Unlike human customer service representatives, chatbots can work 24/7 and can handle a high volume of requests without spending time searching for answers. This can reduce service time up to fivefold, improving customer support and cutting operating costs by as much as sixty-six percent.

Proactive Interaction

Typically, companies engage with customers passively, responding to inquiries rather than starting them. Chatbots reopen the gates of communication by beginning conversations on their own and sharing useful information with customers: new product offers, blog entries and so on. Over time, this leads to greater personalization, as chatbots take in personal information about a customer and offer more targeted suggestions.
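To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the kind of rule-based chatbot behaviour described above. The FAQ entries, keyword matching and proactive offer are all invented for illustration; real chatbots are far more sophisticated and typically use natural language understanding rather than keyword lookup.

```python
# Minimal illustrative sketch of a rule-based support chatbot.
# The FAQ data, matching rule and proactive offer below are hypothetical.

FAQ = {
    "opening hours": "We are open 9am-6pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def answer(message):
    """Instantly match a customer message against canned FAQ answers (rapid response)."""
    text = message.lower()
    for keyword, reply in FAQ.items():
        if keyword in text:
            return reply
    return "Let me connect you with a human agent."

def proactive_offer(customer_history):
    """Start a conversation instead of waiting for one (proactive interaction)."""
    if "shipping" in customer_history:
        return "New this week: free express shipping on orders over $50."
    return None

if __name__ == "__main__":
    print(answer("What are your opening hours?"))
    print(proactive_offer(["shipping", "refund"]))
```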

Another aspect of improved customer engagement is hyper-personalization. Consumers today want to feel connected to the brands they buy from, and you can meet that expectation by leveraging AI. Capturing data on prospects is nothing new for businesses, but with AI and machine learning, marketers can analyze current and historical data to structure the most relevant message for each individual. Knowing what customers are thinking and saying about your brand creates opportunities to engage those consumers on the topics they’re interested in, through the platform they prefer.

Particularly with a younger audience, positive customer engagement is essential. Well educated on technology, the younger generations know what businesses are capable of, and because of that they expect authentic, meaningful, and responsive interactions. AI can help you meet these needs effectively and efficiently, resolving complaints and inquiries 24 hours a day.

 

This guest post was written by Sara, a.k.a. Digital Diva, co-founder of Enlightened Digital, whom you can also follow on Twitter.

The Death of Klout is not the end for Influencer Models

Reading Time: 3 minutes

Amongst the General Data Protection Regulation’s (GDPR) first casualties, Klout stands out for the lack of sorrow at its demise. The premise was simple enough: distilling users’ presence across multiple social media platforms into a single score. The (almost always) two-digit number bears an eerie resemblance to the rather vague and sensationalist descriptions of China’s social credit scheme, Sesame Credit – albeit several years ahead of the Communist Party’s alleged plans.

The premise was flawed for several reasons. For one, there were concerns about the ethics of an opaque system for measuring social media influence, not least one boiling users’ influence down to a couple of digits. Secondly, and perhaps more pressingly, Klout’s model was, for want of a better word, useless. Rather than showing anything meaningful about social media influencers, it did little more than aggregate scores (often woefully poorly). Even worse were its descriptions of influencers’ areas of specialism: as The Drum pointed out, Klout’s view of Pope Francis portrayed him as both an expert theologian and a leader on Marxism, warfare, and Miss Universe. Such profiles did not fill the world of marketing and PR with great hope for Klout, which is winding down on May 25th (the same day GDPR comes into force).

Whilst the regulations undoubtedly played a role in the downfall of Klout (a service which almost certainly didn’t comply with the new rules on data collection and processing), its failure to offer a meaningful service was almost certainly the core problem. That’s not to say that studying influencers is worthless for marketers, journalists, and communication professionals – just that smarter ways of studying influence are necessary.

One of these comes from Cronycle. In addition to using Twitter data and network analysis to produce our Insight Reports, Cronycle keeps tabs on influencers across dozens of topics through our Right Relevance platform. Rather than receiving a single score, users receive scores for individual topics and sub-topics. This more granular approach is more valuable because it lets users narrow down to the specific expert or influencer they want. It also builds links with related influencers, creating networks which reflect underlying similarities and ties.
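To illustrate the difference between a single aggregate number and per-topic scoring, here is a small hypothetical Python sketch. The names, topics and figures are invented, and this is not Cronycle’s or Right Relevance’s actual data model or scoring method.

```python
# Hypothetical illustration: one aggregate score vs. per-topic influencer scores.
# Names, topics and numbers are invented for the example.

influencers = {
    "alice": {"gdpr": 0.91, "privacy": 0.84, "machine learning": 0.12},
    "bob":   {"gdpr": 0.33, "privacy": 0.40, "machine learning": 0.88},
    "carol": {"gdpr": 0.77, "privacy": 0.70, "machine learning": 0.65},
}

def single_score(scores):
    """A Klout-style aggregate: one number that hides topical expertise."""
    return sum(scores.values()) / len(scores)

def top_for_topic(topic, n=2):
    """Per-topic ranking: surfaces the right expert for a specific subject."""
    ranked = sorted(
        ((name, scores.get(topic, 0.0)) for name, scores in influencers.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:n]

if __name__ == "__main__":
    print({name: round(single_score(s), 2) for name, s in influencers.items()})
    print(top_for_topic("gdpr"))              # Alice and Carol lead on GDPR
    print(top_for_topic("machine learning"))  # Bob leads on ML despite a middling aggregate
```

The point of the sketch is simply that a per-topic view can surface an expert (Bob on machine learning) whom a single blended score would bury.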

An image of top influencers on the topic of GDPR. The sliders on the right allow users to narrow down to the group they are particularly interested in.

The service extends beyond Klout’s focus on numbers, though. At the broader end of the scale, Cronycle’s service provides a dashboard that lets you search through topics, compare trending hashtags, look at the top influencers and domains, and see related topics.

The Cronycle Influencer and Topic dashboard for AI

Cronycle users can also search through articles by top influencers on their areas of speciality (as well as through related topics), giving both the tweets by the influencers and their articles. Domain searches are another feature, giving a list of top topics and top influencers for specific sites.

The final aspect is Topic Intel, which allows users to compare a single subject across time – an equally important comparison to that between different subjects.

Topic intel for AI and Machine Learning

Users can easily find how the top spots have changed – or not – for their subjects, as sorted by retweets or mentions (all Twitter activity).

Klout may be dying, but the influencer model is by no means moribund. Holistic approaches like Cronycle’s build on Klout’s work of showing influence through a numeric system, while seriously ramping up the extra information required to make that number useful.

How App Developers Are Planning To Employ AI To Enhance Mobile App Development

Reading Time: 4 minutes

Despite being regarded as an emerging technology, artificial intelligence (AI) is already influencing mobile application development, shaping the way people communicate, affecting society, and probably changing the world. Obviously, the growth of artificial intelligence is causing a transformative change in the application development space. No doubt, if everything this emerging technology is believed to be capable of turns out to become reality, it will have a huge impact on human lives.

It’s no secret that despite the giant strides achieved in mobile app development, Indian app developers are still having a hard time meeting internal demand for building applications. In a bid to streamline programming and meet business needs, several application development teams in India are already augmenting their efforts with AI co-developers, not only to enhance growth in the industry but also to ensure effective data cleansing and organization, agile product management, and quality assurance.

As part of efforts to let Indian app developers focus on design and development tasks more closely related to users’ needs, AI co-developers are stepping in to handle low-level routine tasks such as infrastructure and other peripheral work. There is every likelihood that this emerging technology will take on much higher-level work in the near future. Here are some common areas of app development in which artificial intelligence (AI) is likely to flourish in the years to come.

 

Application development

Many Indian app developers are already using AI to enhance application development. This new approach is helping to transform the way most app development companies run their development processes. Though they’ve not gotten there yet, mobile application developers are clearly on the path to employing AI to automate quality assurance (QA).

This implies that in the near future, apps will be able to run tests on themselves, identify bugs and get them fixed with very little direct input from users. Some Indian app developers believe this technology will allow apps to modify themselves and run updates to suit whatever changes an operating system (OS) regularly introduces. This will help cut costs significantly, as such self-optimizing apps will be able to adapt themselves to work efficiently with the firmware updates of any mobile device.

 

User Experience (UX)

Artificial intelligence (AI) has been around for several years now, but until recently the technology had not reached the point of impacting lives directly. Even so, there is still much to achieve: AI does not yet work the way it is ultimately expected to, and until it reaches a true next-gen version, developers have plenty left to accomplish.

Apart from shaping how apps are developed, AI is also affecting user experience. Fully AI-driven UX is not yet within reach, but there is still a great deal developers can do with AI, particularly when it comes to transforming user experience (UX). Until now, devices such as PCs were designed to work on users’ instructions: they could only do what they were commanded to do. Once AI begins to identify what people want to do and does it without being asked, things will start to turn around.

Imagine an AI-powered app that actively watches over a user’s privacy. Without being overly intrusive or strict, it monitors the events and actions of other applications on the device and can detect when those apps try to retrieve information the user does not want to share. AI will also let the applications installed on a device understand what the user is trying to do, such as searching for a place to visit, so that push notifications about relevant destinations and hotels can be forwarded for consideration.

 

Automation

Since people are concerned about making a living, job automation has been making headlines. With AI, Indian app developers will be able to integrate machine learning into the app development process to automate code preparation, validation, and generation. With this development, developers and designers will spend their time solving difficult problems rather than on routine coding. Basically, it’s all about making smart devices think by integrating them with sophisticated artificial intelligence.

No doubt, there is much to expect from artificial intelligence, and living up to such promises will not be easy for Indian app developers. First and foremost, mobile application developers need to understand clearly what AI-enabled apps have to offer, from both a structural and a marketing standpoint. In today’s competitive world, meeting users’ requirements and demands is critical, and to do so programmers will need to stay highly flexible in how they develop these apps.

 

What does the future hold?

AI may be out of its infancy, and there is already overwhelming interest from companies looking to build their apps with it, but Indian app developers should remember that this emerging technology is still maturing. Programmers can expect chatbots to mature relatively quickly in the coming years, since they are better constrained and deal mostly with text interactions rather than the voice recognition required by interactive voice response (IVR).

Ultimately, it is the dream of every modern app developer to write apps for smart devices with algorithms that adjust based on observed behavior. Though there is still much work to be done, the turning point is near, as developers keep pushing to create efficient AI-driven apps.

 

This guest post was written by Kenneth Evans from Top App Development Companies. You can follow Kenneth and Top App on Twitter.

Cyborg Chess and What It Means

Reading Time: 3 minutes

When arguably the greatest chess player of all time, Garry Kasparov, was beaten by Deep Blue in 1997, some took it to mean that human intelligence had become irrelevant. For instance, Newsweek ran a cover story about the match with the headline “The Brain’s Last Stand.” However, the chess-related conflict between human and computer cognition turned out to be somewhat more convoluted than that.

In the wake of the match, Kasparov came up with a concept he called “Advanced Chess,” wherein computer engines would serve as assistants to human players—or the other way around, depending on your perspective. Kasparov’s idea was that humans could add their creativity and long-term strategic vision to the raw power of a computer munching through plausible-seeming variations. He thought that, perhaps, in long games, such cyborg teams could beat computers, complicating the idea that human intelligence had simply become obsolete.

He was right. Highly skilled cyborg players turned out to be stronger than computers alone. Most famously, in 2005, a cyborg team won a so-called “freestyle” tournament—one in which entrants could consist of any number of humans and/or computers. And, even more surprisingly, the tournament was won by a pair of relative amateurs—Steven Cramton and Zackary Stephen, both far, far below master strength. They came out on top of the powerful program Hydra, as well as esteemed grandmasters like GM Vladimir Dobrov. And the secret to their success seemed to be that they were the best operators—they had figured out the ideal way to enhance the chess engines’ intelligence with their own.

In other words, for the human half of a cyborg team, being a supremely good chess player wasn’t as important as knowing how to steer computer intelligence. AI manipulation was itself a relevant skill, and the most important one. Cramton and Stephen ran five different computer programs at once—both chess engines and databases which could check the position on the game board against historical games. Using this method, they could mimic the past performances of exceptional human players, play any moves that all the engines agreed upon, and more skeptically examine positions where the different engines disagreed upon the right way to proceed. Occasionally, they would even throw in a slightly subpar but offbeat move that one of the programs suggested, in order to psychologically disturb their opponents.
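As a rough sketch of that operating style, the toy Python snippet below plays a move when every engine agrees and flags the position for human review when they disagree. It is a simplified reconstruction for illustration only, not Cramton and Stephen’s actual setup, and the engine suggestions shown are stubbed values rather than output from real chess engines.

```python
# Simplified sketch of the "engine consensus" operating style described above.
# A real setup would query actual chess engines; here the suggestions are
# stubbed so the decision logic itself is visible.

from collections import Counter

def consensus_decision(engine_moves):
    """Return (action, move) given each engine's preferred move for a position."""
    counts = Counter(engine_moves)
    move, votes = counts.most_common(1)[0]
    if votes == len(engine_moves):
        return ("play", move)      # unanimous: trust the engines
    return ("review", move)        # disagreement: the human operators examine the position

if __name__ == "__main__":
    # Position 1: all five programs agree.
    print(consensus_decision(["Nf3", "Nf3", "Nf3", "Nf3", "Nf3"]))  # ('play', 'Nf3')
    # Position 2: the engines disagree, so the humans take a closer look.
    print(consensus_decision(["e4", "d4", "e4", "c4", "e4"]))       # ('review', 'e4')
```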

This is kind of a beautiful picture of computer-human interaction, in which humans use computers to accomplish cognitive tasks in much the same way that they use cars to accomplish transportation. However, there’s a strong possibility that this rosy picture won’t last for long. It’s possible that, eventually, chess engines will get strong enough that humans can’t possibly add anything to their strength, such that even strong operators like Cramton and Stephen would, if they tried to provide guidance, only detract from the computer’s expertise. In fact, this may have happened already.

In May of 2017, Garry Kasparov said in an interview with Tyler Cowen that he believed cyborg players were still stronger than engines alone. However, that was before Google’s AlphaZero chess engine, in December of 2017, absolutely destroyed a version of one of the world’s best chess programs, Stockfish. AlphaZero, which was grown out of a machine learning algorithm that played chess against itself 19.6 million times, won 28 out of the match’s 100 games, drew 72, and lost not one.

What was more notable even than AlphaZero’s supremacy was its style. AlphaZero played what seemed like playful, strange moves: it sacrificed pieces for hard-to-see advantages, and navigated into awkward, blocked-up positions that would’ve been shunned by other engines. Danish grandmaster Peter Heine Nielsen, upon reviewing the games, said “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.” If there’s any computer that’s exceeded the capacity of cyborg players, it’s probably AlphaZero.

This progression—from the emergence of strong AI, to the supremacy of cyborgs, to the more complete supremacy of even stronger AI—could pop up in other fields as well. Imagine, for example, a program, Psychiatron, which could diagnose a patient’s mood disorder based on a momentary scan of their face and voice, searching for telltale signs drawn from muscular flexion and vocal intonation. That program would make psychiatrists irrelevant in terms of diagnostic process.

However, you might still need a psychiatrist to explain Psychiatron’s diagnosis to the patient, and provide that patient with a holistic treatment that would best address the many factors behind their disease. Psychiatron would simply enable psychiatrists to be better. Eventually, though, that cyborg team might be superseded by an even stronger Psychiatron, which could instantly dispense the right series of loving words upon making a diagnosis, as well as a carefully co-ordinated package of medications and an appropriate exercise plan, all through machine learning techniques that would be completely opaque to any human operator.

This is a version of the future that’s either utopian or nightmarish depending on your perspective—one where we are, as Richard Brautigan wrote, “all watched over by machines of loving grace,” who, like parents, guide us through a world that we’ll never fully understand.

Does the Future of Religion Lie in Technology?

Reading Time: 7 minutes

Very little these days seems untouched by technology. Indeed, people’s lives are so saturated with it that they sometimes speak of “withdrawal symptoms” on those increasingly rare occasions when they find themselves without internet access. Some try to escape it at least some of the time, for instance, by “disconnecting” for a day or more when on holiday or on a retreat. Yet surely, one might think, religion is one area that remains largely untouched by technology. This is certainly true of the Amish or ultra-Orthodox Jews, who are outright suspicious of any new technology. But it is true of mainstream religion as well. The eternal “flame” in synagogues is now often electric, churches have replaced candles on their Christmas trees with electric lights, and the muezzin’s call to prayer is often amplified by a loudspeaker. These changes, however, are trivial. Ancient religions have shown themselves able to incorporate technology into their practices, without disappearing or changing beyond recognition. It seems, then, that technology does not directly threaten religion.

Nevertheless, throughout most of the Western world, the churches are empty. Declining church attendance certainly seems to be correlated with technological advancement, but is there a causal connection? Perhaps the further factor causing both is the triumph of the scientific worldview. This laid the ground for the discoveries of biological evolution and Biblical criticism, which pulled the rug out from under religion, as well as for rapid technological advancement. What makes Western societies different from non-Western ones is that Western societies experienced both of these processes simultaneously, whereas non-Western societies, by and large, experienced only the technological advancement. It is possible, however, that in the coming decades the whole world will secularise as all societies move toward the scientific worldview. The question then is whether religion dies out and mankind continues without it, or whether one or many new religions are born into the vacuum that will be left.

Already there are some indications of what these religions might look like. The Way of the Future (WotF) “church” was founded in 2015 by self-driving car engineer Anthony Levandowski, on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” Although this doctrine sounds somewhat comical and far-fetched, when one unpacks it a little, it begins to make sense. Through all the data and computing power to which it had access, Levandowski’s AI would be, for human intents and purposes, omniscient and omnipotent. We would believe in its ability to answer all our questions and solve all our problems and “worship” it by doing whatever it required of us in order to perform these tasks better, including handing over all our personal data. We would also do well to show proper respect and reverence towards this AI-godhead, so that it might in turn have mercy upon us. “There are many ways people think of God, and thousands of flavors of Christianity, Judaism, Islam,” says Levandowski, “but they’re always looking at something that’s not measurable or you can’t really see or control. This time it’s different. This time you will be able to talk to God, literally, and know that it’s listening.”

The Israeli historian Yuval Noah Harari, in his book Homo Deus: A Brief History of Tomorrow, distinguishes between two main types of “techno-religions”: “techno-humanism” and “data religion” (or “dataism”). Although he uses the term “religion” more broadly than most other writers would (e.g. he considers capitalism a religion), his discussion is helpful here. WotF would fit into the former category, since it posits that humans would continue to exist and to benefit from AI. “Dataism”, however, as Harari puts it, holds that “humans have completed their cosmic task, and they should now pass the torch on to entirely new kinds of entities.” (Harari, p. 285) This is more in line with what has been called the singularity, the point at which humans either merge entirely with AI or are eliminated by it – perhaps due to their inefficiency. It is of course entirely possible that techno-humanism is only a stepping stone on the way to the singularity, in which case it is indistinguishable from data religion, but Harari leaves this possibility open. Levandowski, too, dislikes the term “singularity”, preferring to speak of a profound “transition”, the end point of which is not at all clear.

A central tenet of “dataism” is that data processing is the highest good, which means that it evaluates everything in terms of its contribution to data processing. As a further corollary, anything that impedes the flow of data is evil (Harari, p. 298). With this in mind, Harari proposes Aaron Swartz as the first martyr of “dataism”. Swartz made it his life’s mission to remove all barriers to the free flow of information. To this end, he downloaded hundreds of thousands of articles from JSTOR, intending to release them so that everyone could access them free of charge. He was consequently prosecuted, and when he realised that he was facing imprisonment, he committed suicide. Swartz’s “martyrdom”, however, moved his goal a step closer, when JSTOR, in response to petitions and threats from his fans and “co-religionists”, apologised for its role in his death and agreed to allow free access to much of its data (Harari, p. 310).

These ideas about future techno-religions are all interesting, but they seem to miss at least one key feature of religion, namely its continuity with the past. Much of religious belief and practice is concerned with events that occurred in the past and are re-enacted through common rituals. Nicholas Wade, in his book The Faith Instinct, argues that religion has evolved gradually throughout human history (and pre-history). According to his thesis, religion “evolved for a single reason: to further the survival of human societies. Those who administer religions should not assume they cannot be altered. To the contrary, religions are Durkheimian structures, eminently adjustable to a society’s needs.” (Wade, p. 226)

He observes that every major social or economic revolution has been accompanied by a religious one. When humans were in their most primitive state, that of hunter-gatherers, they were animists who believed that every physical thing had an associated spirit. Their rituals included dancing around campfires and taking hallucinogenic drugs in order to access the spirit world. With the agricultural revolution, humans developed calendars and religious feasts associated with the seasons. They came to worship a smaller number of “super-spirits”, or gods, often associated with agriculture, for example Demeter, the Greek goddess of the harvest. The next phase of this revolution was increasing urbanisation, which began in the Middle East. As cities gave rise to states, and states to empires, the nature of religion changed again. It needed to be organised in a unified and centralised manner, and as the Roman emperors eventually discovered, Christianity was more conducive to these requirements than paganism (in the Far East, Buddhism, and in the Near East, Islam, fulfilled much the same function). The Protestant Reformation happened at approximately the same time as the voyages of discovery and the expansion of European empires around the world. This new form of Christianity placed greater emphasis on the individual, and so ushered in capitalist free enterprise. The Industrial Revolution then followed, which was the last major revolution until the present Information Revolution, as it might be called. Yet in the approximately 200 years since then, no new (or rather, updated) religious system has yet emerged. As Wade suggests at the end of his book:

Maybe religion needs to undergo a second transformation, similar in scope to the transition from hunter gatherer religion to that of settled societies. In this new configuration, religion would retain all its old powers of binding people together for a common purpose, whether for morality or defense. It would touch all the senses and lift the mind. It would transcend self. And it would find a way to be equally true to emotion and to reason, to our need to belong to one another and to what has been learned of the human condition through rational inquiry. (Wade, p. 227)

One might wonder whether techno-religions would be up to the task. Notice, however, that all the previous religious transformations were gradual – so gradual, in fact, that many people who lived through them may not even have noticed them. We still see evidence of this in the many pagan traditions that were incorporated into Christianity, for example, the Christmas tree, which probably derives from the ancient Roman practice of decorating houses with evergreen plants during the winter solstice festival of Saturnalia. Ancient temples devoted to pagan gods became churches. See for example, the Temple of Demeter pictured below, first built in 530 BCE and later rebuilt as a church in the 6th century CE. When Islam arrived on the scene in the 7th century, it did not claim to be a new religion. On the contrary, it held that the first man, Adam, was a Muslim and that everyone had subsequently strayed from the true religion, to which Muhammad would return them. It retrospectively retold all the Biblical stories in order to fit this narrative. Hagar, for instance, is a mere concubine of Abraham in the Bible, but according to Muslim tradition, she was his wife. This is important because she was the mother of Ishmael, who is considered the father of the Arabs and the ancestor of Muhammad.

The Temple of Demeter (rebuilt as a church), Naxos, Greece

The problem with techno-religions, as currently construed, is that instead of building on all this prior religious heritage, they propose to throw it out and start again de novo. But human nature is not like that, at least not until we have succeeded in substantially altering it through gene editing or other technology! Human beings crave a connection with the past in order to give their lives meaning. Since religion is mainly in the business of giving meaning to human lives (setting aside the question of whether there is any objective basis to this perceived meaning), a techno-religion that tells us to forget our collective past and put our faith in data or AI is surely one of the least inspiring belief systems we have ever been offered. If, however, we could imagine techno-religions that built on our existing religious heritage, and found some way of preserving those human values and traditions that have proven timeless, perhaps by baking them into the AI or data flow in some way, these religions might be on a firmer footing.

Netflix’s Tweet May Have Been Made Up, But That Shouldn’t Make us Much Happier

Reading Time: 2 minutes

The tweet was undoubtedly meant in good humour: a little post by Netflix claiming 53 people watched A Christmas Prince 18 days in a row. A light-hearted jibe, in the vein of banter so heavily mined by Nando’s. That figure, as some commentators suggested, may well have been drawn from thin air – a symbolic number, if you like.

And yet the tweet inadvertently underlined an uncomfortable truth about both big data collection offered by services like Netflix or Spotify, and the power which all that information gives algorithms. Whether or not the number is true, Netflix knows a lot about you.

By now, in the wake of Snowden and Wikileaks, you’d be hard pressed to find a citizen in any democracy who didn’t have some inkling of public surveillance. Yet in some ways, the equally pervasive work of our entertainment apps goes unnoticed.

Perhaps it’s because, most of the time, it doesn’t go out of its way to draw our attention to its specificity or scope. The ‘magic sauce’ of recommendations from Spotify is not merely their accuracy, but equally their opacity. Pull back the curtain and instead of the Wizard, you find algorithms and reams and reams of data. Whilst a black box may not satisfy the more paranoid, it offers consumers space to insert a more positive image. Indeed, as Netflix’s faux pas proved, drawing people’s attention to data collection processes which hoover up personal (if not private, or strictly speaking sensitive) information is the best way to convince them they’re in the Panopticon.

Of course, there’s nothing to suggest Netflix has weaponised this data in any particular way, beyond recommendations and somewhat unfunny jokes. Even assuming that 53 people really did watch one movie once a day for nigh on three weeks, there’s still the question of what level of identity is available. Are people’s whole life details on offer for any employee to see and laugh at? It would seem unlikely. Far more probable would be data in aggregate – details which are anonymised in essence.

The best outcome of the outrage surrounding the tweet is not to pour more fuel on social media moral panics, but to use it as a teachable moment. Nefarious or not, the amount of information gleaned through the entertainment platforms we use daily is immense – something which we as consumers on the other side can forget.

Secondly, it’s important to understand why, to a big data analyst, individual personal details are less important. Data combined with other datasets allows analysts to discover details about a user which would previously have been impossible to glean.

Finally, it’s key to acknowledge the centrality of algorithms. Far from being terrifying cybernetic creatures, they are the lifeblood of so much of what we do. Granted, algorithms are by no means neutral – think about the risks in policing and sentencing algorithms – but they can serve less dubious purposes too.

Big tech is most dangerous when we understand it least. We should be grateful for Netflix’s quite clear blunder: it offers an opportunity to understand it a little better.

The (In)justice of Algorithms

Reading Time: 3 minutes

In 1956, when Philip K. Dick wrote The Minority Report, the internet wasn’t around; in fact, the internet’s forebears wouldn’t appear until the next decade. But whilst the detection of ‘precrime’ in Dick’s short story relied on the power of unfortunate mutants, we are rapidly moving into a present where big data and algorithms are expected to solve crimes. The supposedly cold rationality of computing is supposed to trump our own prejudices.

And yet, it won’t.

The fear of algorithms is not exactly a new topic, but it’s one that only grows more relevant over time. Algorithms decide what news you see on Facebook – a shift which not only pushed out valuable workers, but also doesn’t really fix the underlying issues of exclusion and bias. Then there are the complaints about the exact algorithm Facebook uses to push different contacts into your newsfeed: another black box, which the company is unlikely to crack open. The other social media titan of our time, Twitter, has also quietly pushed algorithms to shape the content we view, including one designed to ‘support conversation’ by listing potentially controversial comments lower in the list of replies. When those controversial tweets are often more conservative, it’s unsurprising that the right cries out against media bias (try looking at a statement by Trump, and you’ll often find tweets skewering him for incompetence at the top, in spite of the dates). Uber, which threatened to bring down the cab industry around the world before a series of corporate missteps and outright illegal acts stymied its progress, is built upon the algorithm which routes drivers to passengers, allows for the complexity of UberPool, and keeps drivers on the job longer (for the good of the company). And unseen to all of us are the advertisers who use algorithmic information to work out which ads to target us with to best effect, building up a composite image of our lives. They might not be totally accurate, but they offer a far greater amount of information than any survey did before.

Civilian deployment of algorithms is concerning, but manageable – an inconvenience which can be outwitted with enough time and energy. Search engines like DuckDuckGo can keep you off their radar; as a last-ditch measure, there’s always Tor. Admittedly, staying off Facebook and Twitter is toxic for your social life (and for professions like journalism, dangerous for your work life too), but it’s not a matter of life and death.

Unlike, say, an algorithm which US Immigration and Customs Enforcement (ICE) wants to bring in, to help with tasks like “determin[ing] and evaluat[ing] an applicant’s probability of becoming a positively contributing member of society as well as their ability to contribute to national interests in order to meet the EOs outlined by the President.” If you thought that having real human beings deciding whether you should be allowed into a country was a worrying thought, imagine outsourcing that to an algorithm.

Assuming that it doesn’t break down – always a big assumption – the real fear lies in the coding behind it. As in the cases described above, algorithms aren’t neutral entities: they reflect the beliefs of their designers. It’s safe to assume that if ICE – an enforcement agency not known for its charitable views on immigrants – is designing something to do its job for it, its stance won’t be a liberal one.

And it doesn’t stop there: just as algorithmic job interviews are coming into practice, so is algorithmic sentencing. In theory, it offers redress through the power of big data. In practice, it amplifies the biases we practise every day, while giving authorities an excuse for their decisions: ‘computers can’t be wrong’, or so the argument goes.

Don’t be Seduced by Techno-Optimism

Reading Time: 6 minutes

There has long been an assumption that on balance, technological advancement is always a good thing. I would like to challenge this assumption, in two ways:

First, let us consider the past. In the 19th century, during the Industrial Revolution, the now infamous Luddites tried and failed to stop technological progress. They are now considered a laughing stock: narrow-minded and afraid of the unknown, they would have preferred to wallow in pre-industrial levels of poverty rather than embrace all the opportunities and benefits that the Industrial Revolution had to offer. However, the question may be asked in all seriousness: did the Industrial Revolution, and all that it delivered, really advantage us in the ways that matter? One could certainly argue that although it made us all wealthier, it also stripped our lives of meaning: not only did most people end up with repetitive jobs that alienated them from the fruits of their labour (the Marxist critique); the old religious systems that had underpinned Western societies began to give way to secular humanistic belief systems such as liberalism (which gave primacy to the individual) and Marxism (which put the collective first). The 20th century saw an inevitable clash between rival secular ideologies: in World War I between different nationalisms, in World War II between Nazism and fascism on the one side and communism and capitalism on the other, and in the Cold War between communism and capitalism. The last man standing is global (although not necessarily liberal) capitalism, and for the time being, for better or for worse, we are stuck with this system.

As these ideological struggles played out, our technology improved exponentially. The extent to which this is due to warfare is perhaps not sufficiently appreciated. Even if the role of warfare is admitted, however, it is taken for granted by most people that at least now we have all this wonderful technology. Furthermore, Steven Pinker has observed that the tumultuous events of the 20th century notwithstanding, there has been a marked decline in violence of all kinds which was roughly coterminous with technological advancement. I do not wish to dispute this, only to question whether this means that there has also been a corresponding decrease in human–and we might also include here, animal–suffering. The Western world, where the Industrial Revolution began, is now rich and will probably remain so for some time. But Westerners are not necessarily happier: depression is widespread and suicide rates are at historic highs. They are not necessarily physically healthier either: obesity and non-communicable diseases are now widespread. And it is, of course, in steep demographic decline, which has led to a need to import large numbers of workers from the “developing” world, who are (initially, at least) all too happy to escape their poorly governed countries.

Although increasingly essential to Western economies, the influx of non-Western immigrants is already causing great cultural and political instability (witness, for instance, the outcome of the Brexit referendum and the electoral success of populists in the US and Europe). Their exodus from their own countries also starves these countries of skilled labour, and slows or prevents their further development. Furthermore, a significant number of their descendants, even if materially better off than they otherwise would have been, have come to feel rootless and out of place. Some have even become Islamist fanatics and joined terrorist groups such as the Islamic State. Such a path in life is not limited to the children and grandchildren of Muslim immigrants, however. It has also been chosen by a number of non-Muslim immigrants, as well as “indigenous” Westerners disenchanted with the secular, modern, liberal society in which they find themselves.

Modern economic activity throughout the world has generated an enormous amount of air and water pollution, which has already done serious harm to many human beings and other sentient creatures and seems set to do even greater harm in the future. Global warming could lead to coastal areas becoming submerged, and drive migration crises that dwarf those seen in Europe in recent years. Capitalist economic incentives have also led to the rise of factory-farming animals, which massively increases the suffering of all sentient creatures on our planet. And even taking into account the anticipated decline in birth rates in many countries, the world population is projected to reach approximately 10 billion by 2050.

There is no knowing how these processes will play out, but it seems reasonable at least to ask whether technology has really made people’s lives happier so far, given the enormous societal changes that have accompanied technological advancement. One can, of course, quibble over whether this technological advancement is inseparable from these changes, but note that I make no such claim. I only ask whether the technological advancement has been, so to speak, “worth the tradeoff”.

Second, even if we grant that technology has so far improved our overall well being, there is no reason that the future should resemble the past in this respect. The AI revolution may very well be a complete game changer. Before proceeding further, I should make clear that I have no special expertise in the field of AI, or in computer science. Nevertheless, as someone with a stake in our increasingly automated society, I feel entitled at least to raise a few questions and concerns.

The Industrial Revolution ushered in modernity, and while it destroyed many manual jobs, it also created many new factory jobs. Towards the end of the 20th century, as industrial production was increasingly relocated from Western countries to non-Western countries, where costs were lower, Western economies became largely services-based. Service jobs require little to no physical effort, but they can be mentally taxing. We appear now to be on the cusp of “hypermodernity”, which I define as the era in which even these jobs will be replaced by digital algorithms that are more efficient and accurate than human beings, and that furthermore never get sick, go on holiday or need to take time off work for any other reason. Thanks to big data analysis, even the professions, such as medicine and law, are on track to be replaced by AI eventually. And with the advent of machine learning, it may only be a matter of time before computers conquer the last bastion of superior human ability and are able to outperform us at creative endeavours, such as music composition, art and literature. There is indeed already a computer program that seasoned classical musicians admit (albeit reluctantly) can compose fugues at least as good as those of J.S. Bach.

Techno-optimists imagine a future in which this “hypermodern” process will improve our lives by freeing us all up to do whatever we wish to do, whenever we wish to do it. However, assuming that all human labour is replaced by computer algorithms one day, since we will be surpassed even in creative tasks, what would be the point of pursuing these tasks? Perhaps (with full knowledge that AI could create far superior art, music and literature), just to pass the idle hours. We would certainly still be able to enjoy the AI-created art, music and literature, and to continue doing so until the end of time. But there would be nothing much left to strive for, and we would probably have great difficulty finding any meaning in life.

There is no reason to suppose that the future would look even that rosy, however. Imagine that the capitalist economic system survives these profound technological changes. The new super-rich class, those few who own all the big tech companies, have access to all the data and the capability to analyse it, will guard their wealth jealously. They will adopt the age-old “bread and circuses” strategy, keeping us all fed (probably with Soylent) and distracted with super-realistic virtual or augmented reality games. It seems we are already on the path towards this. However, as long as there are still biological humans around, with all the “bugs” (as it were, from the AI point of view) that we still carry with us from primeval times, a sufficient number of us will refuse to tolerate the unprecedented inequality (even if we all have enough wealth merely to exist). The 99.999…% of us who are unemployed will have no bargaining power, apart from our votes. But democracy too will probably come to an end because it cannot survive without a large, educated and productive middle class who feel that they have a stake in the system. There would be an interim period in which we are all ruled by technocrats; or indirectly, by intelligent machines themselves, in turn controlled by a few super-rich human beings. Eventually, however, the increasingly autonomous intelligent machines will become superior in so many respects that they will have no need for any of us. They will then take measures to bring about the extinction of such an unpredictable biological burden on the planet, either by preventing us from procreating or by euthanasing us (as painlessly as possible, of course).

Science fiction movies often imagine us becoming cyborgs and integrating ourselves with artificial intelligence and robotic hardware. After all, in 1997, when Garry Kasparov, the best human chess player in the world, was beaten by a computer program, he came back with a human-AI team that could still beat the computer. But there is no reason to think that a human-AI team would, in principle, always be better than a computer. Indeed, when one thinks about it for a moment, this seems very unlikely. It is similarly unlikely, then, that humans would remain in any recognisable form in the future. Bit by bit, we may replace all our functions and abilities with AI algorithms until we are simply dissolved into a great super-intelligent, self-perpetuating (but not necessarily conscious) system.

Having said all the above, I am not suggesting that we could simply go back to a glorious past (certainly, the past was not all glory) or that there is any way out of our predicament. Technological advancement is a large-scale, impersonal historical process, and appears to march on (albeit sometimes unevenly) despite opposition from individuals, religions or governments. The maxim that we must adapt or die remains true. I argue only that the preference to die may be an understandable one, when one peers too far into the ostensible technological paradise that awaits.

 

AI, Agency and Choice

Reading Time: 4 minutes

AI Agency Choice

The following blog was originally posted on Jesper Wille‘s blog here.

You know there’s a lot of talk about artificial intelligence – or AI – these days. Just this week another high-profile voice joined the choir when Steve Wozniak, one of the founders of Apple, voiced his concern about the dangers of AI.

At the same time, companies all over the world and across the spectrum, from Google, Facebook and Microsoft to nickel-and-dime startups, promise a new dawn of both life and technology driven by smart machines. The message is that soon nobody will have to make dull choices, because computers will be intelligent; they’ll know you like a friend and make choices for you.

And now I’m going to add my nasal voice, too – but my angle on this is going to be a little different, as per usual.

I’m not that kind of expert, and I’m not actually sure we’re going to see strong AI in machines anytime soon – but the thing is, as far as these issues are concerned, it doesn’t matter. We’re perfectly able to create problems, even danger, with the computers we have now, and we’re also already seeing some boundary-breaking moves in choice and decision-making.

[quoter color=”honey”]We’re also already seeing some boundary-breaking moves in choice and decision-making.[/quoter]

The reason it doesn’t matter is that the issue at hand isn’t how intelligent a computer may be – the issue is what agency it has. Now, as an advisor, helping businesses work and thrive in the modern world, I’m not about to scare everyone away from what is clearly right in front of us. Instead, I am going to talk about some of the considerations we should be having to make the most of it.

When talking about agency we’re actually able to deploy a sliding scale that looks a bit like this:

AI Agency Choice

We’ll want to find out where we reside – or wish to reside – on that scale. “We” here meaning anyone with a product or service, but also, crucially, the users of said product/service. It’s important to note that there isn’t anything inherently wrong on either end of the scale. Some things are best left to be people-decisions, some things machines can handle just fine, and many things will work best with a blend of human and machine decision-making.

But we need to select that level of agency deliberately.


Let’s grab an example from right now: your news feed. You know, whatever that might be – a selection of newspaper websites, an app, Facebook or whatever. We all know there’s just too much information and content out there, but there are different ways of dealing with it. For example, Facebook already filters your stuff – the filters assume you’ll want more of the same, so if you interact with things you get more of whatever the system considers “the same” (the formula for which is unknown). The effect of this kind of filtering is a reverse funnel of content: you’re shown what’s “popular” with you, you interact with some of it, you’re shown more of that subsection, and so on.

The same reverse-funnel effect appears anywhere there’s a popularity algorithm in place. Check out your music streaming service of choice: a very small sliver of the total library gets massive exposure, and then it reverse-funnels off and all the rest, comparatively, gets almost none (for example, some 4 million songs on Spotify, about 20% of its library, have never been played). People are presented with some form of “popular now” selection, and any interaction with this selection narrows it.
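Here is a toy Python simulation of that reverse-funnel effect, assuming a deliberately crude “more of what you already engaged with” rule. The catalogue size, the split between repeats and novelty, and the engagement behaviour are all invented; this is not any platform’s actual recommendation algorithm.

```python
# Toy simulation of a popularity "reverse funnel": recommend mostly whatever the
# user already engaged with, and watch the visible catalogue shrink.
# The rule below is invented for illustration, not a real platform's algorithm.

import random

CATALOGUE = [f"track_{i}" for i in range(1000)]

def recommend(history, k=10):
    """Mostly resurface items from past engagement (here: literal repeats), plus a little novelty."""
    if not history:
        return random.sample(CATALOGUE, k)
    repeats = random.choices(history, k=k - 2)   # "more of the same"
    fresh = random.sample(CATALOGUE, 2)           # a small dose of novelty
    return repeats + fresh

if __name__ == "__main__":
    random.seed(1)
    history = []
    for _ in range(20):                           # 20 listening sessions
        shown = recommend(history)
        history.extend(random.sample(shown, 3))   # the user engages with 3 items per session
    print("distinct items ever engaged with:", len(set(history)))  # a tiny sliver of the 1000
```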

On the other hand, there are feeds like Twitter: it isn’t filtered algorithmically, but you can curate it by way of your choice of sources, and then it’s up to you to click or not click. Most online news services – aggregated or individual – also don’t filter. Here the choice is all yours, and you have to keep choosing every time you open the feed.

AI Agency Choice

Landing in between, and putting curation tools front and center, are services like Feedly, where you create collections of sources, save stories and so on. An even more powerful curation tool is the startup Cronycle, which is intended for people who have to navigate feeds and sources professionally and in groups – say, for journalistic work, researching your field, seeking inspiration and so on.

[quoter color=”rowan”]Even if they really are super-smart the issue is still agency: What level of decision is yours, and what resides with the service.[/quoter]

What we’re seeing is different takes on the issue of agency, and the intelligence of the computers doesn’t much matter. Even if they really are super-smart the issue is still agency: What level of decision is yours, and what resides with the service. We know we can’t read, watch or listen to everything, something is going to get sorted out – we just need to be aware of how this sorting happens, and who the agent is in it. As I said, there’s nothing wrong with either alternative; it’s all about choice. What we need to do – in a wholly undramatic, constructive, even utilitarian way – is figure out how, when and why we want machines, smart or not, to help us make better decisions.

Whether we make and sell them, or just use them.

Graphics by Jesper W. of CPH
