I recently came across the paper "ChatGPT is bullshit" by Michael Townsen Hicks, James Humphries and Joe Slater.
I think the authors are quite right to point out and emphasize this limitation of LLMs: their tendency to generate text whose referents are not true, i.e. do not exist. I experienced that multiple times with earlier versions of ChatGPT. To give one example, it supplied titles of books on the history of ancient Assyria that did not exist, even though the authors were real.
(Some who have used Perplexity report that it satisfies most of their needs, including fairly sophisticated scientific ones, with a lot more integrity than ChatGPT.)
- Please look at this item that I created by interacting with ChatGPT, after watching the TV series "Wolf Hall" - Crimson Reason: Using ChatGPT: Who Poisoned Cardinal Wolsey?
- I used ChatGPT, together with research produced by another person, to create an article in Wikipedia: please see https://en.m.wikipedia.org/wiki/Qian_family_lineage
- ChatGPT was able to diagnose a cracked tooth from my full set of symptoms, which was confirmed when the tooth actually broke while I was chewing!
- A Professor of Mechanics told me that ChatGPT "knows" more than he does about the Theory of Elasticity; essentially the entire corpus of that field is available to ChatGPT.
- Staff at General Motors have, in some instances, been using it to create skeleton presentations. ChatGPT could easily eliminate an entire role or job.