AI doesn’t have to be all that smart to cause a lot of harm. Take this week’s story about how Bing accused a lawyer of sexual harassment — based on a gross misreading of an op-ed that reported precisely the opposite. I am not afraid because GPT is too smart, I am afraid because GPT is too stupid, too dumb to comprehend an op-ed, and too dumb to keep its mouth shut. It’s not smart enough to filter out falsehood, but just smart enough to be dangerous, creating and spreading falsehoods it fails to verify. Worse, it’s popular enough to become a potential menace.
But a lot of other people are scared for a different reason: they imagine that GPT-5 will be wholly different from GPT-4, not some reliability-and-truth-challenged bull in a china shop, but a full-blown artificial general intelligence (AGI)....
AGI really could disrupt the world. But GPT-5 is not going to do any of that...
A safe bet is that GPT-5 will be a lot like GPT-4, doing the same kinds of things with the same kinds of flaws, just somewhat better. It will be even better than GPT-4 at creating convincing-sounding prose. (None of my predictions about GPT-4 proved incorrect; every flaw I predicted would persist has persisted.)
But GPTs on their own don’t do scientific discovery. That’s never been their forte. Their forte has been and always will be making shit up; they can’t for the life of them (speaking metaphorically, of course) check facts. They are more like late-night bullshitters than high-functioning scientists who would try to validate what they say with data and discover original things. GPTs regurgitate ideas; they don’t invent them....
Some form of AI may eventually do everything people are imagining, revolutionizing science and technology and so on, but LLMs will be at most only a tiny part of whatever as-yet-uninvented technology does that.