this post was submitted on 30 Aug 2025
33 points (100.0% liked)

SneerClub

1183 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago
top 15 comments
[–] zogwarg@awful.systems 2 points 3 hours ago

We have:

No more sycophancy—now the AI tells you what it believes. [...] We get common knowledge, which recently seems like an endangered species.

Followed by:

We could also have different versions of articles optimized for different audiences. The question is, how many audiences, but I think that for most articles, two good options would be “for a 12 years old child” and “standard encyclopedia article”. Maybe further split the adult audience to “layman” and “expert”?

You have got to love the consistency.

And the accidentally (or not so accidentally?) imperialistic:

The first idea is translation to languages other than English. Those languages often have fewer speakers, and consequently fewer Wikipedia volunteers. But for AI encyclopedia, volunteers are not a bottleneck. The easiest thing it could do is a 1:1 translation from the English version. But it could also add sources written in the other language, optimize the article for a different audience, etc.

And also a deep misunderstanding of translation: there is no such thing as a 1:1 translation; it always requires re-interpretation.

[–] blakestacey@awful.systems 15 points 1 day ago (1 children)

And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

wait for it

There should also be some kind of support for multiple AIs disagreeing with each other.

[–] scruiser@awful.systems 5 points 13 hours ago

And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

What they actually mean is that they don't want them to be solved in favor of the dgerard type of people... like (reviewing the exposé on lesswrong)... demanding quality sources that aren't HBD pseudoscience journals or right-wing rags.

[–] ignirtoq@fedia.io 18 points 1 day ago

Now, I don’t intend this to be some kind of “computers vs humans” competition; of course that wouldn’t be fair, considering that the computers can read and copy the human Wikipedia.

I like how he thinks a "computers vs humans" competition in generating an encyclopedia of knowledge (which necessitates true information) is unfair because AI has the advantage. They truly don't understand that chatbots don't have a concept of "fact" as we define it, so this task is impossible for LLMs.

[–] blakestacey@awful.systems 14 points 1 day ago (1 children)

From the comments:

I wonder if you could do something similar with all peer-reviewed scientific publications, summarizing all findings into an encyclopedia of all scientific knowledge.

True believers are fucked in the head.

[–] CinnasVerses@awful.systems 5 points 1 day ago (1 children)

The owner of Birdsite tweeted the same idea "make chatbots write a universal encyclopedia to free us from human experts" a year or so ago.

[–] blakestacey@awful.systems 6 points 18 hours ago (1 children)

Variations on this theme have probably come up repeatedly in promptfondler circles.

[–] dgerard@awful.systems 3 points 12 hours ago

and our very good friends and their predecessors have wanted to automate away the human decision bit since forever.

[–] jackr@lemmy.dbzer0.com 15 points 1 day ago (1 children)

also lmao @ one of the comments:

Maybe future versions of AI chatbots could use something like this as a shared persistent memory that all chatbot instances could reference as a common ground truth. The only trick would be getting the system to use sound epistemology and reliably report uncertainty instead of hallucinations.

This will fix all problems with AI if only we fix the fundamental flaw in the architecture guys!

[–] scruiser@awful.systems 5 points 13 hours ago

I keep seeing this sort of thinking on /r/singularity: people who are sure LLMs will be great once they have memory/ground-truth factual knowledge/some other feature. In fact, the promptfarmers have already tried (and failed) to add those via fancier prompting (e.g. RAG) or fine-tuning, and actually fixing them would require a massive reinvention of the entire paradigm. That, or they describe what basically amounts to a reinvention of the concept of expert systems like Cyc.

[–] zbyte64@awful.systems 7 points 1 day ago

Normally when you design products you think of your users' needs, but the author is too brilliant for that and is designing products based on making a fanfic "will it blend?" episode

[–] Soyweiser@awful.systems 8 points 1 day ago* (last edited 1 day ago)

I have some thoughts on this plan. Say I make a new video game and use AI to write the articles for it. Where would the AI get the data from? And how do you prevent the AI from using slop as a source? (And just listing it as 'a problem' will not fix the problem; it undermines the fundamentals of your whole project.)

Also "sounds kinda cool" really? Cool? You have seen the current crop of LLMs? Did you ask a LLM if this was a good idea first or something?

[–] a_wild_mimic_appears@lemmy.dbzer0.com 5 points 1 day ago* (last edited 1 day ago)

This... is a genuine thought, but that's all it has going for it. Not every thought is worth an essay.

But I actually read it halfway, just to humor the author.

[–] jackr@lemmy.dbzer0.com 5 points 1 day ago (1 children)

This might actually fit better in tech takes. If so, you can remove it here.

[–] dgerard@awful.systems 5 points 1 day ago

oh nah it's fine, LW is what it's for