Tweet

Interesting if true, but the risk of “hallucinations” (factual mistakes, in AI/LLM parlance) makes it less reliable than it looks. Still, we all want to find the appropriate explanatory level relative to our current understanding. I just worry about the backend. https://twitter.com/max__drake/status/1643594487971676160
