I am getting pretty sick of AI hype. It's not that I think AI is without value or a mirage or anything; it's that people seem weirdly reluctant to challenge even the most obviously nonsensical claims about the industry. But since no one else seems to want to play skeptic on this, I guess I'm "it". So here goes:
My critique of current AI-mania is basically three-fold.
1. The Term Artificial Intelligence is Being Stretched Beyond Meaningful Use.
This is a point perhaps best made by Ian Bogost in The Atlantic a few months ago. The term "artificial intelligence" used to mean something sentient, self-aware, or at the very least autonomous. Nowadays it means nothing of the sort: pretty much any algorithm that analyzes a large amount of data gets described as AI. It's a term companies use to hype their new products, but usually, when you see "AI", there is very little intelligence actually involved - or even, necessarily, very much learning. It's often just a label slapped on an ordinary tech project to attract the attention of VCs.
2. Lab Success Does Not Equal Commercial Success.
A few weeks ago on Twitter, my friend Joseph Wong of the University of Toronto pointed out that the way we are talking about and hyping AI bears more than a passing resemblance to the way we talked about biotech in the early 1990s (and he should know, having authored a rather excellent book on the subject). There have certainly been some impressive recent gains in the way machines learn to recognize specific types of patterns (visual ones, in particular) or tackle certain very specific problems (beating humans at chess, Go, etc.). These are not trivial accomplishments. But getting from there to actual products that affect the labour market? That's a different matter entirely.
I think you can basically boil the problems with putting useful AI on the market into two broad categories. The first is a familiar one in computer science: GIGO, or "garbage in, garbage out". Basically, the promise of AI is that if you give a particular type of algorithm a large enough data set, it will learn to spot correlations well enough to identify or predict X based on Y at least as reliably as a human does. But that assumes the training data is actually labelled correctly. In computerized games, that's not a problem. Up to a certain level of detail on images, it's not a problem. But it isn't always so: many data sets are messy or highly contextual. Remember the story about the AI program that could "tell" a person's sexuality from a photograph? It turns out the photographs came from a dating site - a rather restricted sample in which people presumably had a reason to play up certain aspects of their sexuality. Would the same AI work on passport photos? Probably not. Yet a lot of AI hype is built on results from contextualized data like this, plus the assumption that they will carry over to real-life, free-flowing, non-contextualized situations. The evidence for that is not especially good.
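If you want to see GIGO in action, here's a toy sketch in Python (using scikit-learn's bundled digits data as an arbitrary stand-in; the noise levels are made-up choices for illustration, not anything from a real AI product): the exact same learner, fed progressively mislabelled training data, gets progressively worse.

```python
# Toy GIGO demo: train the same classifier on increasingly mislabelled
# data and watch test accuracy degrade. The dataset and noise levels
# are arbitrary choices for illustration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise  # which training labels to corrupt
    # Replace the chosen labels with random (possibly still correct) ones.
    y_noisy[flip] = rng.integers(0, 10, size=flip.sum())
    model = LogisticRegression(max_iter=5000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```

You should see accuracy fall as the noise rises, which is the whole point: the algorithm can only ever be as good as the labels it's fed, and nobody verifies labels for free.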
The second problem is that even with clean data sets and an AI that recognizes patterns well, no one has come even close to pattern recognition that works in more than one domain. That is to say, there is no such thing as "generalized" artificial intelligence. You basically have to train the little bastards up from nothing on every single new type of problem. And that's a problem as far as commercialization is concerned. If we have to pay a bunch of computer scientists a crapload of money to design and clean data sets, and then work out and laboriously verify the accuracy of pattern recognition in every individual field of human endeavour, the likelihood of commercializing this stuff in a big way anytime soon is pretty low. (Kind of like biotech in the 90s, if you think about it.)
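To make the narrowness point concrete, here's a companion sketch under the same made-up setup as above: a digit classifier that does fine on data like its training data falls apart when the very same digits are merely rotated 90 degrees. The rotation is just an arbitrary stand-in for a "new domain".

```python
# Toy narrowness demo: a classifier trained on upright digits is tested
# on the same digits rotated 90 degrees - a trivially "new" domain.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Rotate each 8x8 test image by 90 degrees to simulate a domain shift.
X_rot = np.rot90(X_test.reshape(-1, 8, 8), k=1, axes=(1, 2)).reshape(-1, 64)

print(f"same domain:    accuracy {model.score(X_test, y_test):.2f}")
print(f"rotated domain: accuracy {model.score(X_rot, y_test):.2f}")
```

The score on the rotated digits should crater, and the only fix is the expensive one: retraining from scratch on the new domain.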
3. The Legal Ramifications Are a Vastly Underrated Brake.
And all of this is before you even stop to think about the law. Remember five years ago, when people were predicting millions of autonomous vehicles on the streets by 2020? One of the main barriers is simply that legal liability in the case of autonomous agents is unclear. If a driverless van sideswipes your car, who do you sue? The owner? The manufacturer? The software developer? It could be decades before we sort this stuff out, and until we do there is going to be a massive brake on AI commercialization.