By Alistair Hoffert
No one today can give a precise vision of how the next ten − much less five − years will unfold in the field of AI. That’s not to say that broad predictions about AI’s impact in the coming years and decades are impossible.
Our aim here is different: to make specific predictions about AI trends for the next 12 months, then draw out key implications for business, government, and society as a whole. We’re confident in making near-term forecasts because these nascent trends are already under way, though they aren’t yet attracting the attention they deserve.
1 AI will impact employers before it impacts employment
We see a more complex picture coming into focus, with AI encouraging a gradual evolution in the job market that − with the right preparation − will be positive. New jobs will offset those lost. People will still work, but they’ll work more efficiently with the help of AI.
Most people have heard that AI beat the world’s greatest grandmaster in chess. But not everyone knows what can usually beat an AI chess master: a ‘centaur’, or human and AI playing chess as a team. The human receives advice from an AI partner but is also free to override it, and it is the well-practised working relationship between the two that is the real key to success.
This kind of human-AI teaming will become the new normal in the workforce of the future. Most organisations like to set boundaries, putting specific teams in charge of certain domains or projects and assigning budgets accordingly. But AI requires multidisciplinary teams to come together to solve a problem. Team members then move on to other challenges but continue to monitor and refine the first.
With AI, as with many other digital technologies, organisations and educational institutions will have to think less about job titles, and more about tasks, skills, and mindset. That means embracing new ways of working.
2 AI will come down to earth − and get to work
Executives think that AI will be crucial for their success: 72% believe it will be the business advantage of the future. The question is: What can it do for me today? And the answer is here.
The value of AI in 2018 lies not in creating entirely new industries (that’s for the next decade) but in empowering current employees to add more value to existing enterprises. That empowerment is coming in three main ways:
- Automating processes too complex for older technologies
- Identifying trends in historical data to create business value
- Providing forward-looking intelligence to strengthen human decisions
3 AI will help answer the big question about data
Many companies haven’t seen the payoff from their big data investments. There was a disconnect. Business and tech executives thought they could do a lot more with their data, but the learning curve was steep, tools were immature, and they faced considerable organisational challenges.
Many kinds of AI, such as supervised machine learning and deep learning, need an enormous amount of data that is standardised, labelled, and ‘cleansed’ of bias and anomalies.
Consider a typical bank. Its various divisions (such as retail, credit card, and brokerage) have their own sets of client data. In each division, the different departments (such as marketing, account creation, and customer service) also have their own data in their own formats. An AI system could, for example, identify the bank’s most profitable clients and offer suggestions on how to find and win more clients like them. But to do that, the system needs access to the various divisions’ and departments’ data in standardised, bias-free form.
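To make the divisional-data problem concrete, here is a minimal sketch in Python (using pandas) of mapping two divisional extracts onto one shared schema. The division names, column names, and figures are invented for illustration; a real bank’s pipeline would be far larger, but the core step − translating each division’s format into a single standardised view − looks like this:

```python
import pandas as pd

# Hypothetical extracts from two divisions, each with its own column
# names and formats for the same underlying client data.
retail = pd.DataFrame({
    "client_id": [101, 102],
    "AnnualRevenue": ["1,200", "850"],  # strings with thousands separators
})
brokerage = pd.DataFrame({
    "CLIENT": [102, 103],
    "annual_revenue_usd": [430.0, 2100.0],
})

def standardise(df, id_col, revenue_col):
    """Map one division's extract onto the shared schema."""
    return pd.DataFrame({
        "client_id": df[id_col].astype(int),
        "annual_revenue": (
            df[revenue_col].astype(str).str.replace(",", "").astype(float)
        ),
    })

combined = pd.concat(
    [standardise(retail, "client_id", "AnnualRevenue"),
     standardise(brokerage, "CLIENT", "annual_revenue_usd")],
    ignore_index=True,
)

# Aggregate per client across divisions - the standardised view an
# AI system could then learn from. Client 102 appears in both
# divisions, so its revenue is summed across them.
per_client = combined.groupby("client_id", as_index=False)["annual_revenue"].sum()
```

The point of the sketch is the `standardise` step: every downstream AI use case depends on each division’s data passing through some equivalent of it.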
It’s rarely a good idea to start with a decision to clean up data. It’s almost always better to start with a business case and then evaluate options for how to achieve success in that specific case.
4 Functional specialists, not techies, will decide the AI talent race
As AI spreads into more specific areas, it will require knowledge and skill sets that data scientists and AI specialists usually lack.
Consider a team of computer scientists creating an AI application to support asset management decisions. The AI specialists probably aren’t experts on the markets. They’ll need economists, analysts, and traders working at their side to identify where the AI can best support the human asset manager, help design and train the AI to provide that support, and be willing and able to use the AI effectively.
Enterprises that intend to take full advantage of AI shouldn’t just bid for the most brilliant computer scientists. If they want to get AI up and running quickly, they should move to provide functional specialists with AI literacy. Larger organisations should prioritise by determining where AI is likely to disrupt operations first and start upskilling there.
5 Cyberattacks will be more powerful because of AI − but so will cyber-defence
What’s one job where AI has already shown superiority over human beings? Hacking. Machine learning, for example, can easily enable a malicious actor to follow your behaviour on social media, then customise phishing tweets or emails − just for you. A human hacker can’t do the job nearly as well or as quickly.
Intelligent malware and ransomware that learns as it spreads, machine intelligence coordinating global cyberattacks, advanced data analytics to customise attacks − unfortunately, it’s all on its way to your organisation soon. AI itself, if not well-protected, gives rise to new vulnerabilities. Malicious actors could, for example, inject biased data into algorithms’ training sets.
In other parts of the enterprise, many organisations may choose to go slow on AI, but in cybersecurity there’s no holding back: attackers will use AI, so defenders will have to use it too. If an organisation’s IT department or cybersecurity provider isn’t already using AI, it has to start thinking immediately about AI’s short- and long-term security applications.
6 Opening AI’s black box will become a priority
In the past, for example, to teach an AI program chess or another game, scientists had to feed it data from as many past games as they could find. Now they simply provide the AI with the game’s rules. In a few hours it figures out on its own how to beat the world’s greatest grandmasters.
Instead of playing chess, an AI program with the right rules can ‘play’ at corporate strategy, consumer retention, or designing a new product.
What happens when AI-powered software turns down a mortgage application for reasons that the bank can’t explain? What if AI flags a certain category of individual at airport security with no apparent justification? How about when an AI trader, for mysterious reasons, makes a leveraged bet on the stock market?
Users may not trust AI if they can’t understand how it works. Leaders may not invest in AI if they can’t see evidence of how it made its decisions. So AI that runs as a black box may meet a wave of distrust that limits its use.
We expect organisations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to deploy new techniques that can explain previously incomprehensible AI.
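One widely used family of such techniques treats the model as a sealed box that can only be queried, and measures how much each input actually drives its decisions. Below is a minimal sketch of permutation importance in Python; the `black_box` scorer and the applicant data are invented stand-ins for an opaque production model:

```python
import random

# Hypothetical stand-in for an opaque model we can only query, not
# inspect (in practice, the deployed black-box decision system).
def black_box(income, debt):
    return 1 if income - 2 * debt > 50 else 0

applicants = [(120, 10), (60, 30), (90, 5), (40, 2), (150, 60)]
labels = [black_box(i, d) for i, d in applicants]  # observed decisions

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

baseline = accuracy([black_box(*row) for row in applicants])  # 1.0 by construction

def importance(feature_idx, seed=0):
    """Shuffle one input column and measure how far accuracy falls."""
    rng = random.Random(seed)
    col = [row[feature_idx] for row in applicants]
    rng.shuffle(col)
    preds = []
    for row, shuffled in zip(applicants, col):
        args = list(row)
        args[feature_idx] = shuffled
        preds.append(black_box(*args))
    # A bigger drop means the model leans more heavily on this feature.
    return baseline - accuracy(preds)

income_importance = importance(0)
debt_importance = importance(1)
```

A technique like this doesn’t open the box, but it does give a bank something defensible to say about which inputs mattered when a mortgage application was declined.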
7 Nations will spar over AI
AI is going to be big: $15.7 trillion big by 2030, according to our research. The AI pie is so big that not just individual companies but entire countries are working on strategies to claim the biggest possible slice.
National competition for AI will never cease − there’s too much money at stake. But we do expect growing opportunities, facilitated by the UN, the World Economic Forum, and other multilateral organisations, for countries to collaborate on AI research in areas of international concern.
8 Pressure for responsible AI won’t be on technology companies alone
Seventy-seven per cent of CEOs in a 2017 PwC survey said AI and automation will increase vulnerability and disruption to the way they do business. Odds are good that if we asked government officials, the response would be similar.
Leaders will soon have to answer tough questions about AI. It may be community groups and voters worried about bias. It may be clients fearful about reliability. Or it may be boards of directors concerned about risk management, ROI, and the brand.
We’re not alone in this belief. The World Economic Forum’s Center for the Fourth Industrial Revolution, the IEEE, AI Now, The Partnership on AI, Future of Life, AI for Good, and DeepMind, among other groups, have all released sets of principles that look at the big picture: how to maximise AI’s benefits for humanity and limit its risks.
Alistair Hoffert is a PwC Cognitive and Intelligent Automation Lead.
This article was originally published in ASA.