The history is all there: the chart should contain each granular instance of what was done, in exacting detail. The summary is just one of those obligatory parts of a patient interaction, covering the key details of a visit, because no one has time to review every single data value in the chart when looking through a medical history.
That aspect isn't even a new thing invented to justify AI; it's just how things work today. Each doctor or nurse who works with a patient during a given visit writes a paragraph or two summarizing everything they did and their plans for future care, and maybe imports some of the key data points they consider directly relevant. And while I think this shouldn't be the case, a lot of this can happen hours or even days after the visit is over.
A lot of that work could be streamlined, or even made more accurate, using an LLM where the only data set it references is the data that already exists in the chart. Not hallucination-prone ChatGPT garbage, but something airgapped and tailored to that specific purpose.
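To sketch what I mean (the endpoint URL, request fields, and prompt below are made-up placeholders, not any particular vendor's API; the point is just that the only patient data in the model's context is the chart itself):

```python
# Minimal sketch of a "chart-only" summarization call to a locally hosted model.
# The endpoint, request fields, and prompt wording are illustrative assumptions.
import json
import urllib.request

LOCAL_LLM_URL = "http://localhost:8080/generate"  # hypothetical airgapped inference server

def summarize_visit(chart_entries: list[str]) -> str:
    chart_text = "\n".join(chart_entries)
    prompt = (
        "Summarize the following visit notes for the medical record. "
        "Use only facts stated below; do not add information.\n\n"
        f"{chart_text}\n\nSummary:"
    )
    payload = json.dumps({"prompt": prompt, "max_tokens": 300, "temperature": 0.0}).encode()
    req = urllib.request.Request(
        LOCAL_LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]  # assumed response field

# Example with hypothetical chart entries:
# draft = summarize_visit(["BP 132/84, HR 78", "Started lisinopril 10 mg daily", "Follow-up in 4 weeks"])
```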
You can't make an LLM reference only the data it's summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it chooses whichever piece of that data seems most likely given the existing text in its context window. If there isn't a huge corpus of training data, it won't have a model of English and won't know how to summarise text. And even restricting the training data to medical notes still means it could hallucinate something from someone else's medical notes that's commonly associated with things in the current patient's notes, or leave out something from the current patient's notes that's very rare or entirely absent from its training data.
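To make that concrete, here's a toy sketch of the mechanism, with a tiny made-up vocabulary and random weights standing in for a real model's billions of trained parameters:

```python
# Toy illustration: at each step the model samples the next token from a distribution
# computed from its *trained weights* applied to the current context window. The context
# (e.g. a patient's notes) only conditions the distribution; the candidate tokens and
# their base associations come from training.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["metformin", "lisinopril", "insulin", "follow-up", "daily"]

# Stand-ins for learned parameters (in a real model, billions of weights from training data).
embedding = rng.normal(size=(len(vocab), 8))
output_weights = rng.normal(size=(8, len(vocab)))

def next_token_distribution(context_token_ids: list[int]) -> np.ndarray:
    context_vector = embedding[context_token_ids].mean(axis=0)  # crude context summary
    logits = context_vector @ output_weights                    # scores over the whole vocab
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Even if the context mentions only "lisinopril", every vocabulary item learned in training
# gets some probability -- which is how an unrelated term can get "hallucinated" in.
print(dict(zip(vocab, next_token_distribution([1, 4]).round(3))))
```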
Well, I can't claim to be an expert on the subject, at any rate, but there are plenty of models that are local-only and constrained to directly reference the information they interpret. I'd assume a HIPAA-compliant model would need to be more like an airgapped NotebookLM than ChatGPT.
But I would also assume the risk of hallucinations or misinterpretations is why a clinician would still need to review the AI summary and add or correct details before signing off on everything, so there's probably still some risk. Whether that risk of error is greater or smaller than that of an overworked resident writing their notes a couple of days after finishing a 12-hour shift is another question, though.
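If software were to support that review step, I'd imagine something roughly like this (purely illustrative; the sentence splitting and number matching rules are my own assumptions, not anything a real EHR vendor does):

```python
# Rough sketch of one way a review step could be supported: flag any sentence in the
# draft summary whose numbers don't appear verbatim in the underlying chart text, so
# the clinician's attention goes to the riskiest lines first.
import re

def flag_unsupported_sentences(summary: str, chart_text: str) -> list[str]:
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        numbers = re.findall(r"\d+(?:\.\d+)?", sentence)
        if any(n not in chart_text for n in numbers):
            flagged.append(sentence)
    return flagged

# Example: "10 mg" is in the chart, "20 mg" is not, so the second sentence gets flagged.
chart = "Started lisinopril 10 mg daily. BP 132/84."
draft = "Patient started on lisinopril 10 mg daily. Dose increased to 20 mg."
print(flag_unsupported_sentences(draft, chart))
```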
If you end up integrating LLMs in a way that could impact patient care, that's actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, it might be okay for medical research applications where accuracy isn't as critical.
For what it's worth, I don't mean to say that this is something hospitals and health networks should be doing per se, only that it's something they are doing right now. I'm sure it has benefits for them, as another user described further down in this post; otherwise I don't think all these doctors would be so eager to use it.
I work for a non-profit that connects immigrants and refugees to various services, healthcare among them. I don't know all of the processes they use when it comes to LLM-assisted documentation, but I'd like to think they have some protocols in place to help preserve accuracy. If they don't, that's part of why this is on our radar, but so is malpractice in general (which is thankfully rare here, though it does happen).