this post was submitted on 01 May 2026
160 points (98.2% liked)

Technology

[–] TwoTiredMice@feddit.dk 12 points 13 hours ago (1 children)

Aren't people horrified to give a hallucinatory program full access to your computer?

No. But should they be? Yes.

It's a privacy nightmare and the risk of something going wrong is quite high.

But it is also a very interesting piece of software. I haven't tried it yet, and I am not sure I will, but I do get why people use it.

[–] partofthevoice@lemmy.zip 5 points 13 hours ago* (last edited 13 hours ago) (2 children)

Honestly, it’s a weird position. On one hand, I despise the popular ideas behind it. Complete lack of concern for security, governance, workflow, … it’s like a stack of toddlers in a trench coat, acting like professionals.

On the other hand, I’m rather convinced that there’s a “right way.” What if I implemented a swarm of agents to do mundane tasks, sandboxed them, gave them read access only to non-sensitive assets, gave them write access only to secure, version-controlled locations… maybe let them push code into repositories, but only under feature branches. …

I imagine there has to be a way to actually use this tool professionally. Something sobering, not drunk on AI kool-aid. Yet still, it’s demotivating given the cloud of bullshit surrounding the topic right now.
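The sandboxing described above can be sketched as a plain permission policy that sits between the agent and the filesystem/VCS. This is only an illustration; `AgentPolicy` and every path and branch prefix in it are made up, not any real tool's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Hypothetical sandbox policy for a worker agent: read only
    non-sensitive paths, write only to version-controlled locations,
    push only to feature branches."""
    read_roots: tuple = ("docs/", "src/")
    write_roots: tuple = ("src/generated/",)
    branch_prefix: str = "feature/"

    def can_read(self, path: str) -> bool:
        return any(path.startswith(root) for root in self.read_roots)

    def can_write(self, path: str) -> bool:
        return any(path.startswith(root) for root in self.write_roots)

    def can_push(self, branch: str) -> bool:
        return branch.startswith(self.branch_prefix)

policy = AgentPolicy()
```

The point is just that every agent action goes through a deterministic gate like this before it touches anything.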

[–] TwoTiredMice@feddit.dk 10 points 12 hours ago* (last edited 12 hours ago) (1 children)

What I like about it, I think, is the private-assistant feature, but I can achieve that with other solutions; I wouldn't need OpenClaw for that. I don't think I will go that way anytime soon, though. I think it would stress me out too much.

I am using AI for development daily. I describe an issue or feature to an agent via a skill, and it returns a set of tasks in a structured, validated JSON format. I then run that JSON file through a Python project I created, looping through the tasks one at a time, so my own Python code structures how the agent works. Each step is deterministic, with short bursts of AI delulu that are in turn validated against deterministic steps in pure Python. It works quite well: every feature/task is approached in exactly the same way, and only the in-between AI delulu deviates from previous runs. It makes things much nicer when you have something you trust sitting between what the AI is doing.
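The loop described above might look something like this. It's a minimal sketch, not the commenter's actual project: the task schema, allowed actions, and the stubbed `agent` callable are all assumptions for illustration:

```python
import json

# Hypothetical schema for the tasks the agent must return.
REQUIRED_KEYS = {"id", "action", "target"}
ALLOWED_ACTIONS = {"edit", "test", "review"}

def validate_tasks(raw: str) -> list[dict]:
    """Deterministic step: parse and validate the agent's JSON
    before anything gets executed."""
    tasks = json.loads(raw)
    for task in tasks:
        if not REQUIRED_KEYS <= task.keys():
            raise ValueError(f"task missing keys: {task}")
        if task["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {task['action']}")
    return tasks

def run_tasks(raw: str, agent) -> list[dict]:
    """Loop through validated tasks one at a time; `agent` is the
    short non-deterministic burst, checked deterministically after."""
    results = []
    for task in validate_tasks(raw):
        result = agent(task)                      # AI step
        if result.get("task_id") != task["id"]:   # deterministic check
            raise ValueError("agent answered the wrong task")
        results.append(result)
    return results
```

The structure is the interesting part: deterministic validation on both sides of each model call, so a bad response fails loudly instead of propagating.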

[–] partofthevoice@lemmy.zip 1 points 12 hours ago (1 children)

See, now that sounds pretty cool. It sounds like an automated discovery and work harness. I want to build something like that.

I imagine a huge ecosystem of tools. It only takes one person to build it, and then surely it can be open-sourced, right?

I imagine a SKILL.md repository, alongside the ability to specify SKILL dependencies on a per-project basis. I imagine vector cache layers, version controls, snapshots for swarm state, …
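Per-project skill dependencies would mostly amount to dependency resolution. A sketch, with an entirely made-up manifest format (skill name mapped to the skills it depends on) resolved via a depth-first topological sort:

```python
# Hypothetical per-project manifest: skill name -> skills it depends on.
SKILLS = {
    "refactor": ["lint", "test"],
    "lint": [],
    "test": ["lint"],
}

def load_order(deps: dict[str, list[str]]) -> list[str]:
    """Depth-first topological sort: dependencies come before the
    skills that need them; raises on circular dependencies."""
    order, done, visiting = [], set(), set()

    def visit(skill: str) -> None:
        if skill in done:
            return
        if skill in visiting:
            raise ValueError(f"circular skill dependency at {skill!r}")
        visiting.add(skill)
        for dep in deps.get(skill, []):
            visit(dep)
        visiting.discard(skill)
        done.add(skill)
        order.append(skill)

    for skill in deps:
        visit(skill)
    return order
```

For the manifest above this yields `lint`, then `test`, then `refactor`, so each skill's prerequisites are loaded first.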

Honestly, I’d love to experiment with different architectures for compositing swarms of agents. Curious how different designs might behave holistically, including different paradigms for sharing state between nodes in a swarm.

I also can’t help but feel like there has to be more efficient ways for models to talk to each other than in natural language. If they’re training on the same dataset, why can’t they talk in tokens for example? The human brain doesn’t need to communicate in natural language when the amygdala and prefrontal cortex are having a dispute.

[–] pemptago@lemmy.ml 4 points 11 hours ago

If you're interested, the Linux Unplugged podcast had a recent episode on their experiences. I've never loved the "hey, look, this is inevitable" tone that comes with AI, but I can see its utility when well-scoped, with conservative permissions and oversight, versus letting it loose or vibe coding. Now if only hardware weren't artificially inflated, I might think it was worth dabbling with locally.

[–] frongt@lemmy.zip 3 points 13 hours ago (2 children)

If you spend that much effort, you might as well just do it without AI. Same amount of work, and you know it's not going to have non-deterministic behavior.

[–] yucandu@lemmy.world -1 points 9 hours ago (1 children)

> without AI. Same amount of work

You want me to write an entire library for a brand-new sensor that just came onto the market, by parsing through a hundred-page datasheet, understanding I2C or SPI communication timings, configuration packets, etc.…

When I can just drag and drop the PDF into ChatGPT and say "make a library for this sensor" and it spits out something that has been working without issue for the past 2 years?

Why? Why would I be that stupid?

[–] Miaou@jlai.lu 1 points 7 hours ago

I hear crazy claims like this but haven't seen anything close to this with my own eyes (yet).

I shudder at the idea that SPI or I2C are considered complex by someone who is supposed to interact with hardware. What will you do if a problem arises and you don't even know which pin does what?

[–] partofthevoice@lemmy.zip 1 points 12 hours ago* (last edited 10 hours ago)

Well, I’d be spending that work on a reusable platform/framework. So if the argument is “it’s as much work as doing the work yourself anyway,” then I think it may be worth it.

Same argument we had for building the SQL engine. It’s a lot of work upfront but maybe we can benefit from its functionality for long after that.

I wouldn’t be building a project-scoped work harness. I’d be building a work harness for projects.

Edit: downvote me all you want. The comparison to the SQL engine was a good one. It’s about raising the baseline of readily available information, boilerplate, test data, POCs… between the time (T1) that I have an idea and the time (T2) that I’m ready to start working on that idea. It’s not about having the agent do the work. Not at all.