Post from 2025-08-27 14:49:33

Look at the density of weasel wording in OpenAI's not-really-an-apology:

https://openai.com/index/helping-people-when-they-need-it-most/

I'm sick and tired of people pretending they have ways to enforce LLM behavior, when all they actually do is weight the dice differently; they remain dice.

Trying to enforce security boundaries with a PRNG is bad enough, but you definitely can't prevent a model from reinforcing harmful behavior, because you can't even define what that is.
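To make the dice analogy concrete, here is a minimal sketch (toy token names and made-up probabilities, not any actual vendor mechanism): "alignment" tuning can push probability mass away from an unwanted outcome, but as long as decoding samples from a distribution, the outcome still occurs.

```python
import random

# Toy model of LLM decoding as loaded dice. Token names and
# probabilities are invented for illustration only.
tokens = ["safe_reply", "harmful_reply", "refusal"]

tuned_weights = [0.70, 0.01, 0.29]  # "harmful_reply" heavily downweighted

def sample(weights, rng):
    # One decoding step: a weighted die roll, nothing more.
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample(tuned_weights, rng) for _ in range(10_000)]

# Even at 1% weight, the downweighted outcome keeps showing up:
# the dice are loaded, not removed.
print(draws.count("harmful_reply"))
```

At a 1% weight you expect roughly 100 hits in 10,000 draws; no amount of reweighting makes that count zero, which is the point.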

And this can cost lives, as we just witnessed.