“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.”
A powerful quote, one you might think was fresh from the lips of Elon Musk or Stephen Hawking. But no: it dates from 1950 and comes from the father of computing, Alan Turing. It sums up a few things – that this argument is as relevant today as it was back then, but also that it really hasn’t moved on much. Or has it?
It was certainly the main theme of the discussions at the World AI Summit, which I attended in Amsterdam last week. The event managed to gather many of the greatest minds and leaders in research and applied AI, with global leads from Google, Uber, Alibaba, Tencent, Netflix, Amazon, IBM and Facebook participating.
A sidebar worth noting: it’s the monsters of the internet that have sucked up all the talent. AI is fuelled by data, and they collect much more than most. This trend, however, is creating a brain drain from academia. With these teaching professors now working in the Valley, who’s left to teach the next generation?
The main topic of conversation, and the heart of many of the presentations, was the journey to Artificial General Intelligence – or superintelligence. Many argue it is the most important question of our generation: how do we get to the point where we have created generalised intelligence – able to be applied to many different problems, as opposed to today’s systems, which are highly specialised in a single task – and what do we do when we get there? Both the reality of this concept and the timing of its realisation provoke great debate. What’s surprising is just how wide-ranging the views are at the top of industry and academia. Such divergent opinions suggest, to my mind, an industry with little certainty, so any of the claims – on both timescale and impact – could prove true. So be prepared.
As I listened, discussed and debated with attendees, I landed on the following five key issues and trends facing the AI industry:
A diversity problem at many levels – in gender, ethnicity, socio-economic background and skills. The real lack of women in the audience made the point clear, as did the fact that it’s largely U.S. and some Chinese companies leading the charge. How do we know they have the world’s best interests in mind? Further, it was clear that this cannot be left to the math-men (and women) alone; it needs to be a truly multi-disciplinary field, calling on artists, philosophers, linguists and more.
AI is still a bit like teen sex: everyone thinks everyone else is doing it, so they claim to be doing it even when they’re not. Most deployments of AI today are not actually AI; the term has fallen into buzzword territory. Many startups are putting AI in their company name or key proposition – something a number of the investors were tired of, challenging them to justify the term. You are not an AI company unless you are building AI, they said.
The energy gap. It’s great that we can beat humans at Go, but the human machine is an efficient one. We need about 2,000 calories to power through a day in which we absorb and process vast amounts of data and solve many different problems. A computer programme is currently about 2,000,000x less efficient, and can only focus on one task… So what’s the point?
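To put the human side of that comparison in perspective, the 2,000-calorie figure converts to a surprisingly modest average power draw. A back-of-the-envelope sketch (only the human-side conversion is computed; the machine-side figure varies enormously by workload):

```python
# Back-of-the-envelope: average power of a human running on ~2,000 kcal/day.
KCAL_TO_JOULES = 4184            # 1 kilocalorie (food Calorie) in joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

daily_energy_j = 2000 * KCAL_TO_JOULES          # ~8.37 MJ per day
avg_power_w = daily_energy_j / SECONDS_PER_DAY  # energy / time = power

print(f"Average human power draw: {avg_power_w:.0f} W")  # roughly 97 W
```

Taken at face value, the 2,000,000x figure quoted above would put the machine equivalent of that single, ~100 W general-purpose brain at around 200 megawatts for one narrow task.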
Who has our interests at heart? The industry recognises the need for better governance and standards, but it must acknowledge that the current systems of governance (government and regulation) are inadequate: they can’t keep up with the pace of change, nor attract the talent required to actually understand what’s going on. There were calls from many for a more centralised, global approach to setting standards and governing the development of this important technology.
GMO concerns. There is a real fear, fed by a lack of understanding among those in positions of power and by the “if it bleeds, it leads” mentality of the national media, that AI might become the next GMO – dramatically setting back the potential positive impacts of this technology.
I won’t go into the ethics discussion, which is an essay or book in its own right, but there is a real chance the industry will keep surprising itself with the pace of breakthroughs, and many of the world’s greatest minds are working on getting there. Much like nuclear fission – where the breakthrough, long dismissed as impossible or generations away, happened almost overnight – the same could happen with AI. It could happen tomorrow, in 5 years, in 10, or in 100. But with so many people working in this field, it looks like it will happen.
A question was posed by the day-two headliner, Stuart Russell of Berkeley. His presentation looked at the challenge of setting goals within technology and the need to rethink the current mathematics to create AI designed to be beneficial to humanity at all costs. He closed with the question: “What if you succeed?” He urged anyone working in AI who was unable to answer it to stop and start over.
When I asked people why, many struggled to articulate a clear answer, often falling back on a belief that a superintelligent AI will ultimately make the world a greater place for all – end disease, find new life, eradicate poverty. What’s clear, it seems, is that if you’re in this field it’s seen as an Everest: a summit in sight, waiting to be climbed. My concern mirrors this analogy: going up is always easier than coming down. When we reach the summit, there might not be a way down.