i don't know if it's a convention even in the "serious" AI research industry to use anthropomorphic jargon, but it drives me up a wall to see shit like this:
17.6 Theory of Mind Limitations in Agentic Systems
Agentic systems don't have "theory of mind"; they cannot infer mental states. They are probabilistic word generators operating within non-deterministic frameworks. They can have a system prompt that tells them to generate text that looks like an interpretation of another entity's "mental state", and they can even be directed to treat that text as context, but that is not theory of mind, and the entity they're generating text about may not have a mind at all.
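To make the point concrete, here's a minimal sketch of what "theory of mind" usually amounts to inside an agentic framework: a prompt instruction, with the resulting "interpretation" produced the same way as any other sampled text. All names here (`build_prompt`, `fake_llm`) are hypothetical illustrations, not any real framework's API.

```python
# Hypothetical sketch: the "mental state" output is just conditioned text,
# not inference about an actual mind.

SYSTEM_PROMPT = (
    "Before responding, write a one-sentence interpretation of the user's "
    "likely goals and emotional state, then answer."
)

def build_prompt(system_prompt: str, user_message: str) -> str:
    """Concatenate instruction and message. Everything the model emits,
    including the 'mental state' sentence, is sampled from this string."""
    return f"{system_prompt}\n\nUser: {user_message}\nAssistant:"

def fake_llm(prompt: str) -> str:
    """Stand-in for a sampled completion: a probabilistic word generator
    would emit plausible-looking 'interpretation' text here, whether or
    not the user (or another bot) has any mental state to interpret."""
    return "Interpretation: the user seems frustrated. Answer: ..."

prompt = build_prompt(SYSTEM_PROMPT, "why does my build keep failing?")
print(fake_llm(prompt))
```

The "theory of mind" lives entirely in `SYSTEM_PROMPT`; delete that string and the capability vanishes, which is not how cognition works.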
I wish there were some way to stop these dorks from stealing the imprimatur of cognitive science.
This is nearly as dumb as elon's "show me your 5 best lines of code" shit while he was, er, downsizing twitter. What are you supposed to do when a code review flags some bad code? Fondle your prompts repeatedly until that part gets fixed? Sounds like a solution that will often be much less efficient than making the edits by hand. Maybe they just don't do code reviews now; that would be cool.
It seems clear that every single company that makes money off of software is, or will soon be, in a race to the bottom on software quality, and that's just amazing. I love it for everyone. I choose to laugh rather than cry.