Though commonly reported, Google doesn't consider it a security problem when models make things up
To be clear, all LLMs "make things up" with every use - that is their sole function: generating plausible-sounding continuations, not retrieving facts. We need to stop ascribing any level of sentience or knowledge to these programs. At best, it's a waste of time. At worst, it will get somebody killed.
Also, asking the program why it fabricated something, as if it won't fabricate that answer as well, is peak ignorance. "Surely it will output factual information this time!"
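To make that point concrete, here's a minimal sketch of what generation looks like at each step: the model turns the context into a probability distribution over next tokens and samples from it. The toy vocabulary and the `model_logits` stand-in below are invented for illustration (a real model computes the scores with a neural network), but the loop is the same shape, and nowhere in it is there a step that checks whether the output is true - including when the prompt is "why did you make that up?"

```python
import math
import random

# Toy vocabulary for illustration only.
VOCAB = ["the", "capital", "of", "atlantis", "is", "poseidonia", "."]

def model_logits(context):
    # Hypothetical stand-in for the network's forward pass: a real model
    # deterministically scores every vocabulary token given the context;
    # here we just fake the scores with random numbers.
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def sample_next(context, temperature=0.8):
    # Softmax with temperature, then sample. The "answer" is whichever
    # token the dice land on - plausible-sounding, true or not.
    logits = model_logits(context)
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(VOCAB, weights=probs, k=1)[0]

context = ["the", "capital", "of", "atlantis", "is"]
for _ in range(3):
    context.append(sample_next(context))
print(" ".join(context))
```

Asking a follow-up question just appends more tokens to `context` and runs the same sampling loop again; there is no separate "explain yourself truthfully" mode to fall back on.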
