Almost two years ago, Google’s artificial intelligence (AI) company DeepMind created a system called AlphaGo to take on Lee Sedol, one of the world’s strongest players of Go, the ancient Chinese strategy game played on a 19 x 19 grid. Go is a highly abstract game.
As Google explained in a blog post on its research, the number of possible moves and board positions is greater than that of the atoms in the universe. So it’s hard enough to teach a person the rules and techniques behind the game – let alone a machine.
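That claim can be sanity-checked with some quick arithmetic (the figures below are illustrative, not from Google’s post): each of the board’s 361 points can be empty, black or white, giving an upper bound of 3^361 configurations, against a commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Rough sanity check of the "more board positions than atoms" claim.
# 3**361 over-counts (not every configuration is a legal position),
# but even the known count of legal positions is around 2 x 10^170.
board_points = 19 * 19                # 361 intersections on a Go board
configurations = 3 ** board_points    # empty, black or white at each point

atoms_in_universe = 10 ** 80          # common order-of-magnitude estimate

# Order of magnitude = number of decimal digits minus one
print(f"Board configurations: ~10^{len(str(configurations)) - 1}")   # ~10^172
print(f"Atoms in universe:    ~10^80")
print(f"Positions exceed atoms by a factor of ~10^{len(str(configurations // atoms_in_universe)) - 1}")
```

Even this crude upper bound exceeds the atom estimate by over ninety orders of magnitude, which is why brute-force search was never an option and AlphaGo had to learn instead.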
With that in mind, Google taught AlphaGo a broad goal – to win – plus some essential learning rules. Then the system watched a series of real Go games to pick up the basics, and trialled moves with human trainers on hand to guide it. If it made a mistake, they’d give it a nudge, and it would change its learning – not unlike the process a novice human would go through.
The result? Sedol lost by four games to one, resigning each of the games he lost. He said that the system was being more creative than him. Not only was it coming up with moves he’d never seen before… it was deliberately forcing him into mistakes.
This is a good example of what many see as the expected future trajectory of AI – going from smart systems today that learn in a narrow context, to the rise of artificial general intelligence over perhaps the next decade, creating applications that are broadly as smart as humans across all domains of activity. Eventually, we might reach artificial superintelligence: systems smarter than humans, doing things that we can’t even grasp. At that point we would struggle to articulate what such a technology might be capable of, because we are limited to describing it in terms we understand.
So, what’s the connection between all of this and the future of treasury? If your own strategic thinking has been alert during the above paragraphs, you may already have made the leap…
It seems almost fantastical, but we’ve reached a point where developers are building AI systems, leaving them alone to learn and operate – and then, after checking up on their activities, are saying: “We have no idea how it’s reached that conclusion, or how it’s done what it’s done, but it’s achieving better outcomes than humans performing similar tasks.” AIs are starting to programme themselves and rewrite their own code, and, in many cases, we have no idea how they’re doing it.
The upshot of the speed of progress that we can see, and of the billions of dollars going into developments that are still under wraps, is that anyone who says AI won’t be able to do ‘X’ within a certain timeframe is, almost by definition, going to be wrong. In practice, it seems likely AI will be able to do at least some part of every single task that is carried out on this planet.
Crucially, it may not do it the way you or I do it. Indeed, I recently had a lively debate with a doctor friend who said that, when AI replaces him, that will mark the end of human jobs, and he can absolutely see that happening. My view was that AI won’t replace him – but it will replace a lot of the functions that his paymasters are willing to shell out for and that the end customer, the patient, wants to receive. However, it won’t necessarily fulfil those functions in the same ways that he does, and that’s the big culture shock.
In treasury – and every other area of finance – the same applies. As they get ever smarter, AIs will look at the landscape in a totally different way and, in many cases, they’ll change it – changing the games themselves, not just the rules of play. Just as AlphaGo Zero beat its big brother by creating its own notions of how to win, as AIs get smarter, they will redraw the map of how the profession works.
However, we will still need people. Whenever a business has new opportunities, or wants to go off into fresh markets, or is trying to devise different business models for how it will get paid for its goods and services, we’ll need humans to think that through.
As such, we will have to learn how to manage a working environment in which AIs, humans and software robots work side by side. Humans could well end up having some sort of augmentation – whether chemical, genetic or electronic – to speed up their thought processes, strengthen their memory retention and enhance their stamina.
Many AI pioneers believe that such enhancements are a logical and essential step in our evolution if we want to keep pace with the ever-smarter machines that will surround us.
Perhaps in certain situations AI will be doing the managing, because it will learn faster than everyone else and will have the greatest insights into the situations into which it’s been inserted. So, all of us in the corporate world are going to have to learn, and get used to, new techniques for managing and being managed in the workplace of tomorrow.
That’s just the micro stuff of what happens in our offices, headquarters and remote-working networks. But over and above all that, AI will also have a seismic effect on the macro side of treasurers’ activities: the financial markets.
It’s hard to imagine that anyone remotely close to national and international financial and economic governance systems believes that they’re stable or robust enough for the world that we’re moving into. And anyone who thinks that they are should really be separated from their delusions, because the world is changing far too quickly.
Under the current consensus, we understand that debt is helpful for driving growth. But is it ultimately a destructive model? We saw what happened in the wake of the last financial crisis – countries struggling with debt, individuals struggling with debt – and we know it’s not a net benefit for the whole of society. So we have to come up with cleverer models for financing growth that don’t end up with certain people and societies suffering inordinate hardship.
There will come a point within the next five or so years when AI will be able to examine the global economic and financial scenario and start to come up with different models for how we manage a) financial transaction systems, b) the flow of money around the world and c) the relevant policy levers.
In this situation, AI’s primary advantage is that, while it could be presented with rules that build in human bias, prejudice and political ambition, it could also take a neutral role. In this scenario, it would simply evaluate overall economic goals, examine what’s actually happening in the markets, assess the landscape, look at all the data points and pick out the critical economic and monetary policy actions that will have the greatest impact for the largest number of people and countries.
In the process, AI will begin to design new monetary, economic and financial-market governance systems that none of us could even imagine or understand today. If anyone says that they do understand what AI will create, then they are underestimating its capabilities.
Going further, AI could ultimately generate new branches of science and technology. And if we try to describe that technology in present-day terms, we won’t really capture what’s going to happen, because we’re still using our old language. Think of it as similar to the arrival of aliens: if we tried to describe alien technology in our own terms, we’d be missing something.
There is a lot of argument in the AI community about how quickly – if ever – we will reach artificial general intelligence or artificial superintelligence; some suggest that both could be upon us in less than 20 years.
On the face of it, that may sound scary: AI will take control away from the people who currently have it. However, the current system is pretty scary, too, because most people don’t have control.
We have a system in which – according to the latest World Inequality Report – the world’s richest 0.1% have captured as much of global income growth since 1980 as the poorest 50%. That’s a thousandth of the world’s population gaining as much as 3.8 billion people.
Now, that’s clearly an inefficient operation of the free markets: those high-net-worth individuals can’t possibly spend the kind of money that the billions of people at the other end of the scale need to spend – so they can’t have the same economic impact. They can’t cycle their cash around the economy as quickly, so their accumulation of wealth isn’t going to yield commensurate human progress or benefit.
This isn’t a communist or socialist argument – it’s a sheer economic question: what’s the best way of moving money around the global economy to drive growth, employment and wealth creation for all, so that everyone can improve their lifestyles?
Answering that question will probably require a higher intelligence than we currently possess. But that intelligence may well be created by the machines we’re building. So, on the one hand, yes – AI is very daunting. But on the other, it may be our best hope… the key to creating smarter, fairer and more efficient, workable and transparent systems for governing our planet.
Rohit Talwar is a global futurist, strategy adviser, keynote speaker and co-founder of Fast Future Publishing