I am so torn on this. On the one hand, this is sorely needed. Not everyone in an organization is aware that information fed to an LLM is often used to train future versions of it, that an LLM reflects the biases in its training material, or that an LLM might simply return false information. A strong policy is important for communicating these things.
On the other hand, LLMs have had the biggest impact on my productivity of, well, anything ever. The speed with which I can write complex documents, write code, or help someone else solve a problem has increased by an order of magnitude. My organization recently enacted something similar to this executive order, prohibiting the use of LLMs, so I can no longer get those advantages at work. I'm afraid a heavy-handed policy would strip government workers of the same capabilities.
In 2005 or so, I got a tip about an application called LaunchBar, which Apple would later copy to replace the Sherlock search tool, and which Microsoft later imitated in its PowerToys suite. The machine learning LaunchBar used to tailor its responses to my previous behavior was life-changing. Instead of configuring the application, I changed how it behaved simply by using it.
This is how language models and AI are going to improve your products: subtly, behind the scenes, slightly improving a thousand different use cases, only a fraction of which your regular usage patterns will ever intersect.