The thing I find amusing here is the direct quoting of Gemini’s analysis of its own interactions, as if it were actually able to give real insight into its behavior, as well as the assertion that there’s a simple fix for hallucination, which, sycophantic or otherwise, remains a perennial problem.

My gut response is that everyone understands the models aren’t sentient, and that “hallucination” is shorthand for the false information LLMs inevitably, and apparently inescapably, produce. But taking a step back, you’re probably right: for anyone who doesn’t understand the technology, it’s a very anthropomorphic term that adds to the veneer of sentience.