[-] pixxelkick@lemmy.world 11 points 1 month ago

To be honest, the one thing LLMs actually are good at is summarizing bodies of text.

Producing a critique of a manuscript isn't actually too far out for an LLM; it's sort of what it's doing all the time anyway.

I wouldn't treat it as a substitute for concrete review, and keep in mind that LLM context windows are usually limited to only thousands of tokens, so the model can't remember anything from more than roughly 5 pages back. If your story is longer than that, it'll struggle to comment on anything before the last 5 or so pages, give or take.
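
For a rough sense of scale, here's a sketch of how you might check whether a manuscript even fits in a model's window, using the tiktoken library; the window size, words-per-page figure, and file name are illustrative assumptions:

```python
# Sketch: estimate whether a manuscript fits in an LLM's context window.
# Window size, page size, and file name below are assumptions for illustration.
import tiktoken

CONTEXT_WINDOW = 8192   # example window in tokens; varies widely by model
WORDS_PER_PAGE = 300    # rough manuscript-page estimate
TOKENS_PER_WORD = 1.3   # typical ratio for English prose

def report_fit(text: str, model: str = "gpt-4") -> None:
    enc = tiktoken.encoding_for_model(model)
    n_tokens = len(enc.encode(text))
    pages = n_tokens / (WORDS_PER_PAGE * TOKENS_PER_WORD)
    print(f"{n_tokens} tokens (~{pages:.0f} pages)")
    if n_tokens > CONTEXT_WINDOW:
        print(f"Too long: only the most recent {CONTEXT_WINDOW} tokens fit at once.")

with open("manuscript.txt") as f:
    report_fit(f.read())
```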

Asking an LLM to critique a manuscript is a great way to get constructive feedback on specific details, flag potential issues, and maybe even catch plot holes.

I'd absolutely endorse it as step 1 before giving the manuscript to an actual human: you can likely improve it substantially by iterating over it 3-4 times with an LLM, covering basic issues and improvements, then letting a human focus on the more nuanced stuff an AI would miss or ignore.
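
As a sketch of what that loop might look like in practice, here's one using the OpenAI Python client; the model name, prompts, and file name are assumptions, not a recommendation of any specific setup:

```python
# Sketch of an iterate-with-the-LLM-then-hand-to-a-human workflow.
# Model name, prompts, and file name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def critique_pass(manuscript: str) -> str:
    """One round of surface-level feedback: typos, clarity, obvious plot holes."""
    response = client.chat.completions.create(
        model="gpt-4o",  # example model; use whatever you have access to
        messages=[
            {"role": "system",
             "content": "You are an editor. List concrete, basic issues: "
                        "typos, unclear sentences, obvious plot holes."},
            {"role": "user", "content": manuscript},
        ],
    )
    return response.choices[0].message.content

for i in range(3):  # 3-4 passes, revising by hand between each
    draft = open("draft.txt").read()
    print(f"--- pass {i + 1} ---\n{critique_pass(draft)}\n")
    # ...apply the feedback yourself and save the revised draft before the next pass...
```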

[-] douglasg14b@lemmy.world 37 points 1 month ago* (last edited 1 month ago)

LLMs cannot provide critique

They can simulate what critique might look like by way of glorified autocomplete, but they cannot actually provide critique, because they do not reason and do not think critically. They match their outputs to the most statistically likely interpretation of the input, in what you could think of as essentially a 3D word cloud.
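
To make "most statistically likely" concrete, here's a toy sketch of greedy next-token selection; the vocabulary and scores are invented, and real models do this over tens of thousands of tokens:

```python
# Toy illustration of next-token selection: the model scores every token in its
# vocabulary, and the generator picks from that distribution.
# Vocabulary and logits are invented for illustration.
import math

vocab  = ["the", "plot", "prose", "banana"]
logits = [2.1, 1.7, 1.5, -3.0]  # raw scores a model might assign

# Softmax turns raw scores into probabilities.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for tok, p in zip(vocab, probs):
    print(f"{tok!r}: {p:.3f}")

# Greedy decoding picks the single most probable token at each step, which is
# why output drifts toward the statistical middle of its training data.
print("next token:", vocab[probs.index(max(probs))])
```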

Any critique you get from an LLM is going to be extremely limited and shallow (and therefore not the critical critique you require). The longer your text, the less likely the critique you receive is to reach the depth that's actually needed.

It's good for finding mistakes, it's good for paraphrasing, it's good for targeting. It cannot actually critique, which requires a level of consideration that is impossible for LLMs today. There's a reason text written by LLMs tends to have distinguishing features, or a lack of them: it's a bland, statistically generated amalgamation of human writing. It's literally a "common denominator" generator.

[-] pixxelkick@lemmy.world 0 points 1 month ago

This continues to boil down to that tired argument that an amalgamation of human behavior is somehow distinct from how humans actually behave. Since no one can actually prove how humans produce thoughts, it follows that you can't prove an LLM works any differently either.

So I don't really dig into that argument.
