Why corporate leaders should reflect more seriously about AI

Artificial Intelligence, it seems, is the overhyped technology of our age, says the FT’s John Thornhill in this podcast, ‘AI the new frontier.’
Paul Lewis
Jun 06, 2018

When Google’s CEO says that AI might have as profound an effect on human development as fire, one might be forgiven for dismissing the hubris. All a company needs to do nowadays is slap an AI reference on a presentation and, hey presto, it is seen as cutting edge. And yet, Thornhill and other experts ask whether we might be underestimating the long-term impact of AI.

It wasn’t long ago that search engines, digital assistants, driverless transport, radiology scans and credit checks, for example, were on the wilder fringes of corporate strategy. Today, much of AI’s low-hanging fruit and talent has already been captured by the likes of Google, whose algorithms have improved its ad systems and added immediate value. But even a simple definition of AI – the development of computer systems to perform tasks that humans normally do – has thought-leaders scrambling to consider the implications of everything from mass unemployment to the reconfiguration of our neural networks.

This greater seriousness has arisen because of three recent and converging developments: smarter algorithms, masses of data, and huge computing power. The impact is all around us, in our smartphones and other connected devices.

Ideally, AI should create endless possibilities for human augmentation. Healthcare is a case in point. Guided by the ‘three Ps’ – prevention, personalisation and precision – AI can assess what we eat, monitor our stress levels, detect signs of heart disease, help manufacture bespoke drugs, and much more. But there is no shortage of dystopian visions either, from in-built bias caused by poor data and design to low-cost drones with face-recognition capabilities used for assassinations.

Amid all this there will be real business, political and regulatory challenges that executives must consider. Will the change wrought by AI be incremental or exponential, and how will society share the gains? Is a robot tax or a universal basic income the answer, or will robot owners take the lion’s share? Which jobs will go first (drivers seem particularly vulnerable), what retraining will be available, and for what types of jobs? How will surviving workers respond to being bossed around by an algorithm?

There is also a geopolitical dimension to consider. AI is a strategic priority for China, which enjoys huge data sets and weak regulation – the perfect setting in which to experiment. Tech companies are hoovering up the data, and AI is being weaponised through hacking and on the battlefield.

Few of us know what exactly is going on.

Paul Lewis

Editorial Director at Headspring