So apparently for lemmy.world mods, pointing out that the word "anti-semite" is far more widely used than "antigypsyism, anti-Romanyism, antiziganism, ziganophobia, or Romaphobia", even though the Nazis targeted both Jews and Roma in the Holocaust, is, somehow, "Criticizing Jewish people as a whole".
Or maybe it's the whole "I don't care about any one specific race, I care about people and think it's always unjust when people are treated differently based on things they were born with, such as race" that was deemed "Criticizing Jewish people as a whole".
Good old lemmy.world: they were called out on it repeatedly, so they eventually walked back the whole "criticizing Israel is anti-semitic" thing, but apparently if you don't go along with the view that racism against one very specific group is much worse than racism against people from other groups, then you must be against that specific ethnic group.
My comment in text for reference:
All clearly as frequently used as "anti-semitism" /s
And yeah, I don't care about race, any race, I care about people, which includes them not being unjustly treated for things that were not their choice, such as the race they were born into.
It's Racists who feel the need to care about a race or races, defending things for some races which they do not defend for others, doing little performances about how others must care about those races too and how those who don't "are against those races" - for them race comes first, defining a person and dictating how they should be treated.
For Humanists, race should matter as little to how somebody is treated as the color of their eyes or how tall they are, and yet they see race weaponized again and again by Racists to treat people differently even though those people haven't actually earned such treatment through their actions: in other words, race for Humanists is something that should be irrelevant, yet has been turned by others into a pivot for injustice.
It's pretty obvious from your little performance which one you are.

As the post says:
Which makes sense, as they're statistical text prediction engines and have no notion of what is important or not in a text, so unlike humans they don't treat different parts differently depending on how important each part is in that domain.
In STEM fields accuracy is paramount and there are things which simply cannot be dropped from the text when summarizing it, but LLMs are totally unable to guarantee that.
It's the same reason why code generated by LLMs almost never works without being reviewed and corrected - the LLM will drop essential elements, so the code doesn't work as it should or won't even compile. At least with code, the compiler validates some of the accuracy of that text at compile time against a set of fixed rules, whilst the person reviewing the code knows upfront the intention for that code - i.e. what it was supposed to do - and can use that as a guide for spotting problems in the generated code.
One thing is summarizing a management meeting where most of what's said is vague waffle, with things often repeated and where nobody really expects precision and accuracy (sometimes, quite the opposite), so a loss of precision is generally fine; a whole different thing is summarizing material where at least some parts must be precise and accurate.
Personally I totally expect LLMs to fail miserably in areas requiring precision and accuracy, where a statistical text engine with a pretty much uniform error probability in terms of the gravity of the error (i.e. just as likely to make a critical mistake as a minor one) will, when summarizing, just as easily mangle or drop elements in critical areas requiring accuracy as it does in areas which are just padding.