As we stand somewhere past the start line of the so-called fourth industrial revolution, with no finishing line in sight and no clear idea of what one would even look like, I headed to World Summit AI recently looking for clues as to where things are headed.
A two-day gathering of the foremost thinkers from both academia and industry, the summit attracts experts in AI, in all its shapes and forms, from the likes of Google, Alibaba, Microsoft, IBM and Yandex (the Google of Russia), among others.
Much had changed in a year (I was also there in 2017).
Last year’s event saw much soul-searching and a need to defend the discipline in light of attacks from the likes of Musk and Hawking, which shaped the conversation along the lines of “When will we reach the singularity?” and “Will a general AI take over the world some day?” It was peak hyperbole.
The discussion this year was much more grounded. It was real, and it felt like the community as a whole had decided to come together to plant the narrative in the here and now, and to inject a healthy dose of realism into the future. "It's just software" was the common theme. There was a sense of narrative control, to ensure the field isn't held back by "Terminator" doomsday headlines, outlandish privacy concerns and over-egged predictions of job loss. Let's get on and show you what it is and what it can do.
Some key themes and take-outs:
AI is already here and all around us. Every speaker was keen to make this point. We are not waiting for AI to arrive: it's here, it's pervasive, and being a business without an AI strategy today is like being a business without an internet strategy in 1995. There is a small window of competitive advantage available before it becomes a commodity.
Privacy concerns are holding the field back. Opinions do differ, but the industry largely sees the negative narrative around privacy as overblown and dangerous: the more data available, the better the solutions that can enhance lives and society. In the health sector, argued Luciano Floridi of Oxford University, AI is so gravely under-deployed that the caution is costing lives.
AI is democratising. You don't need to be a data scientist or an engineer to make use of AI or machine learning today. All of the big cloud players (Amazon, Microsoft, Google) provide APIs to AI tools in their clouds, and models can easily be found on GitHub. Much as a business no longer needs software engineers to run software, the same will increasingly be true of AI.
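To make the point concrete, here is a minimal sketch of how little code working machine learning now takes, using the freely available scikit-learn library (the dataset and model choice are my own illustration, not something shown at the summit):

```python
# A handful of lines of off-the-shelf machine learning -- no PhD required.
# scikit-learn's bundled iris dataset stands in for real business data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a classifier and measure accuracy on held-out data.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"Held-out accuracy: {accuracy:.2f}")
```

The cloud-hosted equivalents go further still: a single HTTP call to a pre-trained vision or language API, with no model training at all.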
We need explainable AI. Anything less is reckless. We need to be able to understand how the models work and how they reached their decisions; without this, deploying them is dangerous and irresponsible. After all, AI doesn't do context, it looks for correlation, and it can easily become biased if the data sets it learns from were biased to start with. Humans need to be able to examine its decisions through the lens of context and bias. For example, an AI might "reasonably" conclude that umbrellas cause rainfall, given the correlation in usage, rather than the other way around. We've already seen in the last week that Amazon had to pull one of its recruitment AIs because it was biased against women. (Link: https://www.fastcompany.com/90249309/amazons-hiring-ai-may-have-weeded-out-women-report)
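The umbrella example can be made concrete in a few lines. In this toy simulation (my own illustration, not from the summit), rainfall drives umbrella use, yet the correlation the data exposes is perfectly symmetric, so a model seeing only the numbers has no way to tell which way the causal arrow points:

```python
# Toy simulation: rainfall causes umbrella use, but correlation is
# symmetric -- from the data alone, "umbrellas cause rain" fits just
# as well as the truth.
import random

random.seed(42)
rain = [random.random() for _ in range(1000)]         # rainfall (arbitrary units)
umbrellas = [r + random.gauss(0, 0.1) for r in rain]  # usage tracks rainfall + noise

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Identical in both directions: the statistic carries no causal direction.
assert abs(pearson(rain, umbrellas) - pearson(umbrellas, rain)) < 1e-12
print(f"correlation: {pearson(rain, umbrellas):.3f}")
```

Only a human bringing outside context (rain comes first; umbrellas don't seed clouds) can break the tie, which is exactly the argument for keeping models explainable and inspectable.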
Stop making AI a person, it's a tool. Talk in the legal profession of perhaps needing to give AI agency, and potentially make it a legal person, is foolhardy. We don't need new laws; we have them already. Liability for faults can clearly be mapped to the maker or the user, not the tool itself. Let's not over-complicate this.
Diversity is crucial, on many fronts. No one person can create AI solutions; it's a team sport, and a diverse team is needed across data science, engineering, business translation and more. Equally important is diversity as a means of reducing bias: gender, age, economic background and so on.
It should create (better) jobs. The job-loss concerns are predicated on the belief that we are anywhere near "peak productivity". As one speaker pointed out, that couldn't be further from the truth. There are so many big issues and opportunities that need our time, and so much important work we don't currently attach the right economic value to (care work, for instance). If AI can improve productivity and create more time and wealth, then we'll have time to focus on the stuff that matters, like climate change perhaps?
The AI Nation Wars are B.S. The notion of an AI race between nations was rebuffed by the experts; a fiction, they said. The field as a whole is driven by academic research, the publishing of findings and collaboration, and many of the technologies and models are open source. The real war is being played out on the applied side, at the corporation level, in the competition for talent, customers and their data.
And finally… AI needs a better name. The field has lived with a name coined over 60 years ago, and that name, as a descriptor, is what's causing much of the over-inflated angst and concern. This isn't about building actual "intelligence"; it's ultimately about software getting smarter.