This seems like a good antidote to the silly fantasies (or nightmares) about AI that animate some benighted folks in Oxford--and it comes from a leading AI researcher (who is, alas, a bit of a philosophical muddle on other topics). An excerpt:
When you work so close to A.I., you see a lot of limitations. That’s the problem. From a distance, it looks like, oh, my God! Up close, I see all the flaws. Whenever there’s a lot of patterns, a lot of data, A.I. is very good at processing that — certain things like the game of Go or chess. But humans have this tendency to believe that if A.I. can do something smart like translation or chess, then it must be really good at all the easy stuff too. The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense. It’s crazy....
I’m a big fan of GPT-3, but at the same time I feel that some people make it bigger than it is. Some people say that maybe the Turing test has already been passed. I disagree because, yeah, maybe it looks as though it may have been passed based on one best performance of GPT-3. But if you look at the average performance, it’s so far from robust human intelligence. We should look at the average case. Because when you pick one best performance, that’s actually human intelligence doing the hard work of selection. The other thing is, although the advancements are exciting in many ways, there are so many things it cannot do well. But people do make that hasty generalization: Because it can do something sometimes really well, then maybe A.G.I. [3] is around the corner. There’s no reason to believe so.
[3] Artificial general intelligence: flexible intelligence of the kind humans have, which a machine would need in order to learn intellectual tasks at the level of human beings.