this post was submitted on 22 Mar 2026
-2 points (42.9% liked)

No Stupid Questions


It would be interesting to watch for sure, but I wonder if they might correct each other or collaborate in some way that could be lightly supervised to produce an output

top 8 comments
[–] Kolanaki@pawb.social 2 points 7 hours ago

That's one of the "benefits" of Character.AI.

The only thing I found it good for was playing D&D by making the AI characters my PCs. The fact that it would fuck shit up all the time just made it feel like playing with real people who also don't fully understand the rules, or interpret them differently.

[–] taldennz@lemmy.nz 4 points 15 hours ago (1 children)

A conversation/collaboration... not really.

You can create a 'swarm' of agents with differing roles and phases, and have it iterate on a problem:

  1. Investigation agents that explore the problem and produce a condensed context of their discoveries
  2. A planning agent that uses that context to propose a plan
  3. Some number of iterations: one or more reviewing agents (you can give each an area of focus) criticise the plan, one or more agents propose improvements based on the reviews, and an agent reviews and applies the proposals before the next iteration
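The loop above might be orchestrated roughly like this minimal sketch. Every name here, including `call_model`, is invented for illustration; it is not any real agent framework's API:

```python
# Hypothetical sketch of the investigate -> plan -> review loop described
# above. `call_model` is a stub standing in for a real LLM API call.

def call_model(role: str, prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to a provider's API
    # with a system message describing `role`. Here we just tag and echo.
    return f"[{role}] {prompt[:60]}"

def run_planning_swarm(problem: str, focuses: list[str], iterations: int = 2) -> str:
    # 1. Investigate and condense discoveries into a compact context.
    context = call_model("investigator", f"Investigate and summarise: {problem}")
    # 2. Propose an initial plan from that condensed context.
    plan = call_model("planner", f"Context:\n{context}\n\nPropose a plan.")
    # 3. Iterate: focused reviewers criticise, an improver proposes fixes,
    #    and an editor applies them before the next round.
    for _ in range(iterations):
        reviews = [
            call_model(f"reviewer ({focus})",
                       f"Critique this plan, focusing on {focus}:\n{plan}")
            for focus in focuses
        ]
        proposals = call_model(
            "improver",
            "Reviews:\n" + "\n".join(reviews) + f"\n\nPropose improvements to:\n{plan}")
        plan = call_model(
            "editor",
            f"Apply these proposals:\n{proposals}\n\nto this plan:\n{plan}")
    return plan
```

The stop condition here is a fixed iteration count; a real orchestrator might instead stop when the reviewers have nothing substantial left to criticise.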

Groups of agents of the same role, operating in parallel, should ideally use different models (or have context that gives them differing goals - e.g. focused on maintainable abstractions, security, scalability, test-case identification, etc.).
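That fan-out of same-role agents could look something like this sketch, using only the standard library; `review` is again a made-up stand-in for a real model call:

```python
# Hypothetical sketch: same-role reviewers running in parallel, each bound
# to a different model and area of focus.
from concurrent.futures import ThreadPoolExecutor

def review(model: str, focus: str, plan: str) -> str:
    # Stub: a real version would call the named model's API.
    return f"{model} ({focus}): critique of the plan"

def parallel_reviews(plan: str, assignments: list[tuple[str, str]]) -> list[str]:
    # Fan out one reviewer per (model, focus) pair; collect results in order.
    with ThreadPoolExecutor(max_workers=len(assignments)) as pool:
        futures = [pool.submit(review, model, focus, plan)
                   for model, focus in assignments]
        return [f.result() for f in futures]
```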

The implementation phase can do a similar thing - a code generator followed by reviewers, proposals for action, and then applying the improvements... and you can iterate on testing or benchmarking too, all before hand-over.

This can improve results (sometimes at a non-trivial cost, so budgets are important), and it will still miss sometimes. You can help it, of course, with hints, directions, or even implementations or stubs of the abstractions you expect.

[–] HubertManne@piefed.social 1 points 8 hours ago

I love your reply, and I just want to add for @cheese_greater@lemmy.world that the LLMs don't understand their output or your input - or, in this case, the input and output of the various "conversing" LLMs. Essentially they can't converse or collaborate in the way we do, but what @taldennz@lemmy.nz described is right and is being used by at least some of the models, though I assume by now all are doing something like it.

[–] RegularJoe@lemmy.world 5 points 16 hours ago
[–] the_abecedarian@piefed.social 4 points 15 hours ago (2 children)

what you're proposing requires them to reason and understand each other. LLMs don't do that: they take text input and construct an output from words (tokens) that they have mapped as being close to the ones you entered in your prompt.

it's a clever way to produce a plausible response, but it's not thinking or reasoning.

[–] BCsven@lemmy.ca 1 points 9 hours ago (1 children)

I'm not sure most people are thinking or reasoning either 😀

Didn't you argue that deep learning models were able to think like humans and then used this exact line in your arguments? Like a month or two ago?

[–] taldennz@lemmy.nz 1 points 10 hours ago

Absolutely. It is not thinking in the same way we do.

Putting aside the planning orchestrator and focusing just on the LLM:

  • Input tokens - context + prompt
  • Output tokens - the best matches to the statistical model for your inputs
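As a toy illustration of those two bullets (all names invented; a real model's next-token distribution comes from its learned weights, not a hard-coded stub):

```python
# Toy sketch of the token-level view: input tokens condition a distribution
# over next tokens, and generation stops at an end marker or a length cap.
import random

END = "<eos>"

def next_token_distribution(tokens: list[str]) -> dict[str, float]:
    # Stub: a real LLM returns probabilities over its whole vocabulary,
    # conditioned on all the tokens so far.
    return {"word": 0.7, END: 0.3}

def generate(prompt_tokens: list[str], max_new: int = 16, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        nxt = rng.choices(choices, weights=weights)[0]
        if nxt == END:  # the sampled stop token ends generation
            break
        tokens.append(nxt)
    return tokens
```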

The agent can do this in stages, to decide what the complete set of input tokens should be and at what point to stop requesting more output tokens.

You can use the orchestrator approach to then try to get other models to validate the outcome and refine it - but it's all just prodding the statistical model.

This talk was interesting. I'm a lot less enthusiastic about the topic than the speaker... but this is closer to how I think the industry can see a net gain from AI - before the slop-errors from taking expertise out of the loop hit critical mass.