Technology’s role as a liberating force in the workplace is Dave Coplin’s passion. As Microsoft’s former chief envisioning officer – a job he cheerfully admits he “made up” – his mission was to scan the horizon for emerging tech trends and imagine how they could be applied to the here and now.
In his books Business Reimagined and The Rise of the Humans: How to outsmart the digital deluge, Coplin has evangelised about how technology – all too often portrayed as a threat to the workforce – is in fact paving the way for new forms of collaboration and smarter customer service.
Coplin is now honing the theories and approaches that stem from his lifelong love of technology under the roof of his own consultancy, The Envisioners. In May, he appeared at the ACT Annual Conference to explain why he thinks Big Data presents such a rich opportunity for the professional services.
Here, he expands upon his talk and sets out his insights on why treasurers must harvest as much data as humanly possible – and crunch it in the most effective ways – if their profession is to blaze a strategic trail into the future…
Traditionally, we’ve only used data to reflect upon the past, rather than help us work towards the future.
From a treasury standpoint, we’ll look back at figures from the past year, or quarter, or whatever the relevant period may be. And we’ll infer from that data why particular things have happened to the business.
What’s occurring now, though, is a pivot – one fuelled by data, with artificial intelligence (AI) as the engine. Or, more specifically, machine learning. We’re moving away from reflecting upon the past, and towards using data to accurately predict the future.
This will help us forecast not only where our firms are going to be in ‘x’ months’ or years’ time, but – outside the treasury function – what kind of new business models may emerge in our sectors. Or even new and different approaches to impressing customers.
The important point here is that treasurers need to start making that move themselves, so that they stop using data as a reflection point and start harnessing it as a strategic asset.
Each treasurer who goes through this pivot needs to build a sustainable, renewable supply of data. Equally, they must mull over the following thought process: “In order to tackle the questions I’m looking to answer, do I have the correct type of data? Do I have enough of it? Is it in the right place? And is it secure?”
Ironically, this message sort of bumps up against the watershed we’re experiencing at the moment with the General Data Protection Regulation (GDPR). Don’t get me wrong – it’s an important step, and privacy should definitely be on the radar of all the major tech firms.
But the issue is that measures such as GDPR scare people. So those people become more conservative in their handling of data. In an ideal world, you want data to be veritably sloshing around organisations, so their staff have as much of it to work with as possible.
I think in the first instance we’re going to see a knee-jerk overreaction to GDPR – then, in time, our relationship with it will mature, and people will become a bit more confident about which types of data they’re happy to release into the wild.
A basic principle of machine learning is that, for any algorithm to work, it has to be trained. It learns from the data you yourself provide. And what we’ve noticed is that there’s an almost exponential relationship between the accuracy of the algorithm and the quantity of data it’s been fed.
One example that comes to mind from my Microsoft days is when the firm worked on the grammar-check highlighting tool for Word – you know: that blue, wiggly underline that drives everyone mad. While they were designing the algorithm that triggers the underline, they couldn’t get it to be any more than about 75% accurate.
Someone had the brainwave, “Why don’t we stop tinkering with the algorithm, and instead change the amount of data we feed it?” So the quantity of text that the team used to train the machine went up from a million words to 10 million, and then to 1 billion. And, without touching the algorithm, its accuracy went up to 95%.
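The effect is easy to demonstrate in miniature. The sketch below is purely illustrative – it is not Microsoft’s grammar checker, and every name and number in it is invented – but it shows the same pattern: hold a deliberately naive “algorithm” fixed, grow only the training data, and watch how much of the real vocabulary it stops wrongly flagging.

```python
# Toy sketch (not Microsoft's actual system): the "model" simply memorises
# the vocabulary it has seen during training and flags anything else as a
# possible error. All names and numbers here are illustrative.

def token_stream(n_tokens):
    # Deterministic stand-in for a corpus: common words appear early and
    # often, while rare words only turn up deep into the stream.
    return [f"word_{int(i ** 0.5) + 1}" for i in range(n_tokens)]

def train(corpus):
    return set(corpus)  # the "model" is just the vocabulary observed

def vocabulary_coverage(model, vocab):
    # Fraction of genuine words the checker will NOT wrongly flag
    return sum(w in model for w in vocab) / len(vocab)

true_vocab = [f"word_{k}" for k in range(1, 1001)]
for n in (1_000, 100_000, 1_000_000):
    model = train(token_stream(n))
    print(f"{n:>9} tokens -> coverage {vocabulary_coverage(model, true_vocab):.0%}")
```

Without touching the training or scoring code, feeding the model a thousand times more text takes its vocabulary coverage from a few per cent to complete.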
Another, quite provocative, example I like to use at events is that, when you gather up all of the relevant impact data about the effects that different car models have on the environment, hulking gas guzzlers such as the Land Rover Defender actually turn out to be better for the planet than the apparently progressive and polite likes of the Toyota Prius. Believe it or not, more than 67% of all the Land Rovers built in the past 70 years are still on the road.
In a third example – which again dips back into my Microsoft days – a cardiology ward in a hospital in Washington, DC noticed that it had an abnormally high number of patient readmissions after heart surgery. The staff couldn’t figure out what was going on, even after asking several top cardiologists. So they came to us to get the data scientists on the case.
After crunching a mass of data related to a range of potential factors, the team concluded that patients who suffered from mental illness – specifically depression – would go on to have complications from heart surgery. To this day, nobody knows why that should be the case. But that’s the reality.
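To see how a pattern like that surfaces, here is a deliberately tiny, made-up sketch – the patient records and field names are invented, and real clinical analysis would involve far more data and rigour – of the kind of group-by comparison that lets an unexpected risk factor stand out:

```python
# Illustrative only: compare readmission rates across a candidate factor.
# These records are fabricated for the example.
records = [
    {"depression": True,  "readmitted": True},
    {"depression": True,  "readmitted": True},
    {"depression": True,  "readmitted": False},
    {"depression": False, "readmitted": False},
    {"depression": False, "readmitted": True},
    {"depression": False, "readmitted": False},
    {"depression": False, "readmitted": False},
]

def readmission_rate(rows):
    return sum(r["readmitted"] for r in rows) / len(rows)

# Group the records by the candidate factor, then compare the rates
by_flag = {}
for r in records:
    by_flag.setdefault(r["depression"], []).append(r)

for flag, rows in sorted(by_flag.items()):
    print(f"depression={flag}: readmission rate {readmission_rate(rows):.0%}")
```

Run this kind of comparison across hundreds of candidate factors and enough patients, and the variables that genuinely move the readmission rate separate themselves from the noise, whether or not anyone can yet explain why.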
The takeaway here is that large quantities of data, processed effectively, can present you with answers that may initially feel counterintuitive, or even wrong, but nonetheless hold insights that you wouldn’t have been able to glean yourself.
Crucially, those data-crunching exercises sweep your bias, prejudice and assumptions out of the way. It doesn’t really matter what the subject area is, because the technology is adaptable. But there’s certainly potential for making course corrections if your instinct tells you that, say, the FX scene will go in a particular direction – but the well-trained tech tells you the market’s going to do something completely different. And that’s when data becomes a strategic tool.
Yes – a couple, actually. In the first example, Rolls-Royce used the predictive power of data related to its jet engines to move to a world where it’s selling flight time, rather than the engines themselves.
Under Rolls-Royce’s new system, you as an airline subscribe to the jet engine, and you pay for the blocks of time in which that hardware keeps you aloft. What AI and predictive analytics enabled the firm to do was get really detailed and efficient about the maintenance of those engines. This was important, because well-timed maintenance essentially underpins the entire subscription model.
In the second example, lift manufacturer thyssenkrupp Elevator came up with a similar idea. If you’re a hotel in Manhattan, the last thing you want is your lifts packing up. So again, thyssenkrupp Elevator uses a predictive algorithm to work out when a lift is likely to break down.
On the basis of what the analytics reveal, the company sends technicians to the properties where its lifts are installed and slots all the maintenance work into quiet periods of building use, minimising disruption to the point where it’s practically invisible.
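A minimal sketch of that scheduling idea, assuming an upstream model has already produced a days-to-failure estimate – the function names, thresholds and traffic figures below are hypothetical, not thyssenkrupp Elevator’s system – might look like this:

```python
# Hypothetical sketch: book the repair into the quietest hour once a
# predicted failure is close enough to justify a visit.

def schedule_maintenance(days_to_failure, hourly_traffic):
    """hourly_traffic: dict mapping hour of day -> expected lift journeys."""
    if days_to_failure > 30:  # assumed threshold: no visit needed yet
        return None
    # The quietest hour minimises disruption for the building's users
    return min(hourly_traffic, key=hourly_traffic.get)

traffic = {7: 40, 8: 90, 12: 60, 15: 25, 23: 5}
print(schedule_maintenance(12, traffic))  # picks the lowest-traffic hour
```

The prediction does the hard work; the scheduling on top of it is almost trivial, which is exactly why the disruption becomes practically invisible.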
In my own sphere of work, I’m on the board of the Mitchells & Butlers pubs and restaurants firm. One of the things we’re looking at is using predictions based upon weather forecasts, sports fixtures and historical performance to forecast labour demand. This, we hope, will enable us to come up with staffing schedules that are far more accurate than the whole “stick a wet finger in the air and guesstimate how many people we’ll need in which outlets this week” sort of approach. There’s big money to be saved with that kind of ability.
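A rough sketch of that kind of staffing forecast – the uplift factors, covers-per-staff ratio and all the figures below are invented assumptions, not Mitchells & Butlers’ model – might combine historical demand with simple adjustments for the week’s conditions:

```python
import math

# Hypothetical sketch: forecast staff needed per shift from historical
# demand plus simple uplift factors. All numbers are invented assumptions.
HISTORICAL_COVERS = {"mon": 120, "fri": 300, "sat": 340}  # avg meals served
COVERS_PER_STAFF = 25  # assumed: one team member per ~25 covers

def forecast_staff(day, sunny=False, match_on=False):
    covers = HISTORICAL_COVERS[day]
    if sunny:
        covers *= 1.15   # assumed uplift for good weather
    if match_on:
        covers *= 1.30   # assumed uplift for a televised fixture
    # Round up: slightly over-staffed beats under-staffed on the day
    return math.ceil(covers / COVERS_PER_STAFF)

print(forecast_staff("mon"))
print(forecast_staff("sat", sunny=True, match_on=True))
```

A real model would learn those uplifts from the historical data rather than hard-coding them, but even this crude version shows how weather and fixtures feed straight through into the rota.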
In the treasury world, imagine being able to predict, say, currency fluctuations. What would that look like? This isn’t science fiction. These capabilities are very, very close now – firstly because the amount of data we have is increasing all the time, and secondly because Google, Microsoft and, to a lesser extent, Facebook are making the algorithms we need to process all that data accessible.
This is more complex to answer than it may seem. It would be really easy for me to say, “Oh, corporate treasurers are just stuck to their ledgers and their bits of paper and 19th-century working habits, and what they need to do is wake up and plug into cutting-edge, sophisticated technology.”
However, I think treasurers know that already – and are pretty keen to move on voluntarily. But it’s not so much a matter of arming themselves to the teeth with gadgets and software as it is about undertaking a subtle shift of perspective.
It’s critical for treasurers to bear in mind that their internal and external customers’ expectations of technological change are shaped by those customers’ own interactions with clients and end users. That will automatically colour their view of how much innovation they can reasonably expect treasurers to deliver. Very often, the answer will be “a lot”.
As such, treasurers must continually re-evaluate their jobs in the light of two questions:
Those questions will help treasurers to work out how to innovate within the context of their jobs, and think about which new skills they will need to add. So, essentially, treasurers of the future will be constantly curious and always open to change.
It’s also vital for treasurers to ensure that they know exactly which questions they want to ask with all the new technologies that will be at their disposal.
The worst-case scenario is to buy in technology without knowing precisely what you want to do with it. So, go in with crystal-clear thoughts on which problems you are looking to solve, and the choice of technology will flow from that.
To book your place at the ACT Annual Conference 2019, click here.
Matt Packer is a freelance business, finance and leadership journalist