this post was submitted on 19 Jun 2025
44 points (95.8% liked)
Well, I can't claim to be an expert on the subject, but there are plenty of models that are local-only and required to directly reference the information they interpret. I'd assume a HIPAA-compliant model would need to be more like an airgapped NotebookLM than ChatGPT.
But I would also assume the risk of hallucinations or misinterpretations is the reason why a clinician would still need to review the AI summary to add/correct details before signing off on everything, so there's probably still some risk. Whether that risk of error is greater or less than an overworked resident writing their notes a couple days after finishing a 12-hour shift is another question, though.
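To make the "directly reference the information it interprets" idea concrete, here's a minimal sketch of a local-only, source-grounded pipeline: the system may only surface passages that literally exist in the local record store, each tagged with its source so a clinician can verify it. Everything here is hypothetical illustration (the `LocalIndex` and `ground_summary` names, the toy keyword scoring standing in for real retrieval, the sample notes); a real airgapped setup would use proper embeddings and a local model, but the grounding principle is the same.

```python
class LocalIndex:
    """Tiny keyword index over records that never leave the machine."""

    def __init__(self, documents):
        self.documents = documents  # list of (doc_id, text) pairs

    def retrieve(self, query, k=2):
        # Score each record by how many query words it contains
        # (a stand-in for real embedding-based retrieval).
        words = set(query.lower().split())
        scored = []
        for doc_id, text in self.documents:
            hits = sum(1 for w in words if w in text.lower())
            scored.append((hits, doc_id, text))
        scored.sort(reverse=True)
        return [(doc_id, text) for hits, doc_id, text in scored[:k] if hits > 0]


def ground_summary(index, query):
    # Return only passages that exist verbatim in the local store,
    # each tagged with its source ID so a reviewer can check it.
    return [f"[{doc_id}] {text}" for doc_id, text in index.retrieve(query)]


# Hypothetical de-identified sample notes:
records = [
    ("note-1", "Patient reports persistent cough for two weeks."),
    ("note-2", "Blood pressure 128/82, within normal range."),
    ("note-3", "Prescribed amoxicillin 500 mg, three times daily."),
]

index = LocalIndex(records)
for line in ground_summary(index, "cough duration"):
    print(line)  # only source-tagged passages, never free-form generation
```

The design point is that every line in the output is traceable to a specific record, which is exactly what makes the clinician's review-and-sign-off step feasible: they can check each claim against its cited source instead of fact-checking free-form generated text.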
Integrating LLMs in a way that could impact patient care is actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, it might be okay for medical research applications where accuracy isn't as critical.
For what it's worth, I don't mean to say that this is something hospitals and health networks should be doing, per se, but that they are doing it right now. I'm sure it has benefits for them, as another user further down in this post described; otherwise I don't think all these doctors would be so eager to use it.
I work for a non-profit which connects immigrants and refugees to various services, among them being healthcare. I don't know all of the processes they use when it comes to LLM-assisted documentation, but I'd like to think they have some protocols in place to help preserve accuracy. If they don't, that's why this is on our radar, but so is malpractice in general (which is thankfully rare here, but it does happen).