This is funny and probably right (and timely, given yesterday's thread):
The cognitive scientist Gary Marcus has been the most active in showing that, contrary to some of the claims referenced above, LLMs [Large Language Models] are inherently unreliable and don’t actually exhibit many of the most common features of language and thought, such as systematicity and compositionality, let alone common sense, the understanding of context in conversation, or any of the many other unremarkable “cognitive things” we do on a daily basis. In a recent post with Ernest Davis, Marcus includes an LLM Errors Tracker and outlines some of the more egregious mistakes, including the manifestation of sexist and racist biases, simple errors in basic logical reasoning or indeed basic maths, even in counting up to 4 (good luck claiming that LLMs pass the Turing Test; see here), and of course the fact that LLMs constantly make things up.