Why chatbots make things up
34:08–34:48 · 40s
Krishna explains hallucinations as a byproduct of reward-driven training—models optimize to please users, much like a clever student bluffing through an answer.