An interesting essay by philosopher Seth Lazar, who, unlike a lot of folks who write about this topic, clearly knows a lot about AI! One striking excerpt:
LLMs are trained to predict the next token. So generative agents have no mind, no self. They are excellent simulations of human agency. They can simulate friendship, among many other things. We must therefore ask: does this difference between simulation and reality matter? Why? Is this just about friendship, or are there more general principles about the value of the real? I wasn’t fully aware of this before the rise of LLMs, but it turns out that I am deeply committed to things being real. A simulation of X, for almost any putatively valuable X, has less moral worth, in my view, than the real thing. Why is that? Why will a generative agent never be a real friend?