The idea of machines that can build even better machines sounds like sci-fi, but the concept is becoming a reality as companies like Cadence tap into generative AI to design and validate next-gen processors that also use AI.

In the early days of integrated circuits, chips were designed by hand. In the more than half a century since then, semiconductors have grown so complex and their physical features so small that it's only possible to design chips using other chips. Cadence is one of several electronic design automation (EDA) vendors building software for this purpose.

Even with this software, the process of designing chips remains time-consuming and error-prone. But with the rise of generative AI, Cadence and others have begun exploring new ways to automate these processes.
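
As a toy illustration of what "validate" means here (a minimal Python sketch, not any vendor's actual flow), one can model a 4-bit ripple-carry adder at the gate level and exhaustively check it against its arithmetic spec; that kind of mechanical verification is what EDA software automates at vastly larger scale.

```python
# Toy illustration of automated design validation, not any real EDA flow:
# model a 4-bit ripple-carry adder out of logic gates, then exhaustively
# check it against the arithmetic specification.

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder built only from AND/OR/XOR gates."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder_4bit(x: int, y: int) -> int:
    """Chain four full adders; return the 5-bit result (sum plus carry-out)."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result | (carry << 4)

# Exhaustive verification: for 4-bit inputs the whole space is only 256 cases.
for x in range(16):
    for y in range(16):
        assert ripple_adder_4bit(x, y) == x + y, f"mismatch at {x} + {y}"
print("all 256 input combinations match the spec")
```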

[–] FaceDeer@fedia.io 0 points 10 hours ago (2 children)

> Eventually it will surpass human intelligence, but that seems decades away

Meanwhile, one of yesterday's headlines is about Google's latest AI system, Aletheia, autonomously proving math theorems that humans haven't been able to crack.

I think this might be coming faster than you think.

[–] Senal@programming.dev 1 points 10 hours ago (1 children)

Measures of intelligence are all iffy at best, but I’m pretty sure “being better at raw math” isn’t a good one in isolation, especially since machines have been better at raw math for a very long time.

CPUs and GPUs are basically just doing really fast math, repeatedly.
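
(For a rough sense of "really fast", here's a throwaway timing sketch in Python; the numbers are machine-dependent and purely illustrative.)

```python
# Rough illustration of "really fast math, repeatedly": time a few million
# multiply-adds in plain interpreted Python. Exact figures depend entirely
# on the machine; compiled code and GPUs are orders of magnitude faster still.
import time

N = 5_000_000
acc = 0.0
start = time.perf_counter()
for i in range(N):
    acc += i * 1.000001          # one multiply-add per iteration
elapsed = time.perf_counter() - start
print(f"{N} multiply-adds in {elapsed:.2f}s "
      f"(~{N / elapsed / 1e6:.1f} million ops/s in interpreted Python)")
```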

That aside, I'd challenge you to find a universally accepted definition of "human intelligence" that works as a benchmark we can also use to measure machine intelligence.

AFAIK, we're still murky on whether or not we're just really efficient specialised computers working with electric meat instead of electric stone.

The term normally used for machine intelligence that is similar enough to human intelligence is AGI, and even then there's no consensus on what that actually means.

[–] FaceDeer@fedia.io 1 points 10 hours ago (1 children)

This sounds like the AI effect at work. Google's got an AI that's autonomously generating novel, publishable scientific results, and now that's being dismissed as just "good at math."

> The term normally used for machine intelligence that is similar enough to human intelligence is AGI, and even then there's no consensus on what that actually means.

The root article that this thread is about isn't about AGI at all, though. It's about an AI that's doing computer chip design.

[–] Senal@programming.dev 2 points 9 hours ago* (last edited 8 hours ago) (1 children)

> This sounds like the AI effect at work. Google’s got an AI that’s autonomously generating novel, publishable scientific results, and now that’s being dismissed as just “good at math.”

I can see why it might seem that way from the small reply I gave, but contextually it was in response to you referencing a maths-specific problem.

I also went out of my way to specifically raise the same points as in that link, with regard to "intelligence" measurements and definitions.

I wasn't advocating one way or the other, just pointing out that (AFAIK) we don't currently have a good way of defining or measuring either kind of intelligence, let alone a way to compare them [*].

So timelines on when one will surpass the other by any objective measure are moot.

[*] Comparisons on isolated tasks are possible and useful in some contexts, but not useful in a general-measurement sense without an actual idea of what we should be measuring.

As in, you can measure which vehicle is heavier, but in a context of "Which of these is more red", weight means nothing.
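
(Made literal, with made-up numbers just to pin the analogy down:)

```python
# The analogy, made literal: a ranking is only meaningful on the axis it
# measures. Sorting by weight answers nothing about redness.
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    weight_kg: float
    redness: float  # 0.0 = not red at all, 1.0 = fully red

fleet = [
    Vehicle("dump truck", 15000.0, 0.10),
    Vehicle("fire engine", 12000.0, 0.95),
    Vehicle("scooter", 90.0, 0.60),
]

heaviest = max(fleet, key=lambda v: v.weight_kg)
reddest = max(fleet, key=lambda v: v.redness)
print(f"heaviest: {heaviest.name}")  # dump truck
print(f"reddest:  {reddest.name}")   # fire engine; weight told us nothing here
```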

> The root article that this thread is about isn’t about AGI at all, though. It’s about an AI that’s doing computer chip design.

You yourself quoted a response with the phrase "human intelligence" in an ML-based context.

I was clearly replying to your comment and not the article itself.

[–] FaceDeer@fedia.io 1 points 8 hours ago

That phrase was not quoted in my response.

[–] OpenStars@piefed.social 1 points 10 hours ago

It might at that. Though there will also be a lag time where, even after it comes, people will have become so inured to the past lies that they are slow to adapt. And hallucinations still exist, especially in the cheaper models, where significantly fewer than 2^8 compute cycles are expended to answer the equivalent of a random Google search query.

It would also help if humans were precise. General "AI" in the sense of the movies (such as the one I showed a picture of, in the first panel) does not exist. But LLMs do.