Our first also needed a frenotomy, and we had to go to a specialist clinic outside the hospital.
My best understanding of that situation was that they first wanted a week or two of observation to make sure it was actually a problem. Then, because the procedure was technically classified as dental surgery, a doctor at the hospital couldn't perform it for professional/ethical/insurance reasons.
I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. That's not especially notable in itself - the same could be said of many consumer products and services - but the way it manifests in LLMs is pretty heinous.
This need for continued use is why, for example, Google's AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they're getting what they wanted, it's mission accomplished.