this post was submitted on 27 Mar 2026
187 points (97.0% liked)

Technology


The ARC Prize organization designs benchmarks built around tasks that humans complete easily but that remain difficult for AIs such as LLMs, "reasoning" models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. It comprises hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
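To make that concrete, here is a minimal sketch of an exploration loop against a toy turn-based environment. The environment, its action set, and the reward signal are hypothetical stand-ins for illustration, not the actual ARC-AGI-3 agent API:

```python
import random

class ToyEnv:
    """A tiny stand-in for an ARC-AGI-3-style environment (hypothetical,
    not the real API): a hidden goal cell on a grid, no stated rules."""
    ACTIONS = ["up", "down", "left", "right"]

    def __init__(self, size=8, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.goal = (rng.randrange(size), rng.randrange(size))

    def reset(self):
        self.pos = (0, 0)
        return self.pos  # the "observation": all the agent gets to see

    def step(self, action):
        dx, dy = {"up": (0, -1), "down": (0, 1),
                  "left": (-1, 0), "right": (1, 0)}[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        return self.pos, self.pos == self.goal  # (observation, solved?)

def explore(env, max_steps=10_000):
    """Random exploration: with no instructions, rules, or stated goals,
    the agent can only discover what 'winning' means by acting."""
    env.reset()
    for t in range(max_steps):
        _, solved = env.step(random.choice(ToyEnv.ACTIONS))
        if solved:
            return t + 1  # steps taken to stumble onto the goal
    return None

print(explore(ToyEnv()))  # a random walk solves this toy; real ARC-AGI-3 tasks do not yield to this
```

A real agent would replace the random policy with something that builds a model of the environment and carries it across levels, which is exactly the capability the benchmark probes.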

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what's next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

[Chart: ARC-AGI-3 Leaderboard. Logarithmic cost on the horizontal axis; the vertical scale runs from 0% to 3%. If human scores were included, they would sit at 100%, at a cost of approximately $250.]
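For anyone who wants to re-create the chart from the numbers above, here is a minimal matplotlib sketch (scores and costs as listed in this post, with the human point taken from the chart note):

```python
import matplotlib.pyplot as plt

# Success rates and costs as listed in this post.
entries = [
    ("GPT-5.4 High", 5200, 0.3),
    ("Gemini 3.1 Pro", 2200, 0.2),
    ("Opus 4.6 Max", 8900, 0.2),
    ("Grok 4.20 Reasoning", 3800, 0.0),
    ("Human baseline", 250, 100.0),
]

fig, ax = plt.subplots()
for name, cost, score in entries:
    ax.scatter(cost, score)
    ax.annotate(name, (cost, score), xytext=(4, 4), textcoords="offset points")

ax.set_xscale("log")  # logarithmic cost axis, as in the original chart
ax.set_xlabel("Cost (USD)")
ax.set_ylabel("Success rate (%)")
ax.set_title("ARC-AGI-3 leaderboard")
plt.show()
```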

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass a minimum "easy for humans" threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment counts as solved only if the test taker completed all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
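Stated as code, the inclusion rule reduces to a simple filter. A sketch of the criterion as described, with made-up attempt data for illustration:

```python
def passes_human_threshold(first_sight_results, min_solvers=2):
    """An environment qualifies if at least `min_solvers` of its human
    testers solved every level on their very first exposure to it."""
    return sum(first_sight_results) >= min_solvers

# Hypothetical results for one environment's 10 testers:
# True = completed all levels on first sight.
attempts = [True, False, True, False, False, True, False, False, False, False]
print(passes_human_threshold(attempts))  # True: 3 of 10 solvers clears the bar
```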

top 50 comments
[–] Tetragrade@leminal.space 1 points 38 minutes ago* (last edited 35 minutes ago)

This replay is the funniest shit lmao. Keep building that bridge Claude.

https://arcprize.org/replay/0964128b-a2f5-4c5b-886e-497d893f429d

Interesting that it seems to be perceiving the environment mostly accurately, and is just completely wrong about the purpose of all the game objects.

[–] WorldsDumbestMan@lemmy.today 1 points 59 minutes ago

I'm not sure such a general term is factual.

I doubt I can adapt 100%.

[–] UnrepentantAlgebra@lemmy.world 5 points 3 hours ago (5 children)

If human scores were included, they would be at 100%, at the cost of approximately $250

Wait, why did it cost real humans $250 to pass the test?

[–] KairuByte@lemmy.dbzer0.com 7 points 2 hours ago

I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.

[–] aesopjah@sh.itjust.works 1 points 51 minutes ago

It's also an odd metric, since only 20-60% of the humans completed it. Very "60% of the time, it works every time" energy.

Ideally they'd run the bots through multiple times (with no context or training from previous runs), but I guess that is cost-prohibitive?

[–] FrankFrankson@lemmy.world 4 points 2 hours ago (1 children)

That is how much individual testing humans cost when you buy them in bulk.

[–] Aceticon@lemmy.dbzer0.com 1 points 55 minutes ago

If there had been a "Buy 10, Get 1 free" they could've used 11 humans instead of 10 for the same $250.

[–] mapleseedfall@lemmy.world 3 points 2 hours ago

You'd have to eat $250 worth of burgers to pass it.

[–] ExLisper@lemmy.curiana.net 1 points 2 hours ago

Because I ain't doing this shit for free.

[–] GreatBlueHeron@lemmy.ca 16 points 4 hours ago (1 children)

It's fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

[–] partofthevoice@lemmy.zip 1 points 19 minutes ago* (last edited 19 minutes ago)

Funnier yet will be if they continue to just train the model on that particular kind of test, invalidating its results in the process.

[–] General_Effort@lemmy.world 2 points 2 hours ago (1 children)

ARC-AGI-3

What happened to ARC-AGI-1 and -2?

[–] VAK@lemmy.world 2 points 1 hour ago

AI won them

[–] HaunchesTV@feddit.uk 32 points 7 hours ago (1 children)

Grok Reasoning: 0%

Hilarious

[–] brsrklf@jlai.lu 22 points 4 hours ago* (last edited 4 hours ago)

Reasoning is woke propaganda anyway.

[–] ExLisper@lemmy.curiana.net 15 points 7 hours ago

Can't wait for this to be the new captcha.

[–] RustyShackleford@piefed.social 30 points 10 hours ago (4 children)

As a psychiatrist, I have a theory about what’s missing in AI. First, it lacks childhood dependency and attachments. Second, it struggles to overcome repeated pain and suffering. Third, it lacks regular eating and restroom breaks. Fourth, it struggles to accept loss in everyday situations. Finally, it lacks the concept of our inevitable death. Without these nagging memories and concepts, machines will simply revert to the simpler concepts we use them for in our recent times, such as stealing cryptocurrency. After all, we live in a world run by capitalism, so it’s only logical. ¯\_(ツ)_/¯

[–] CosmicTurtle0@lemmy.dbzer0.com 53 points 7 hours ago (5 children)

As a technologist, I have to remind everyone that AI is not intelligence. It's a word prediction/statistical machine. It's guessing at a surprisingly good rate what words follow the words before it.

It's math. All the way down.

We as humans have simply taken these words and have said that it is "intelligence".
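For what it's worth, the "word prediction" framing can be made concrete. Here is a toy bigram sampler, vastly simpler than an LLM but making the same basic move of sampling the next word from a learned distribution:

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus
# (a toy stand-in for large-scale pretraining).
corpus = "the cat sat on the mat and the cat ran".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = bigrams[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word, out = "the", ["the"]
for _ in range(5):
    if not bigrams[word]:
        break  # dead end: this word was never followed by anything
    word = sample_next(word)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the mat"
```

An LLM replaces the count table with a neural network conditioned on the whole context, but the output step is still a probability distribution over the next token.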

[–] Iconoclast@feddit.uk 2 points 42 minutes ago

A few of the countless dictionary definitions of intelligence:

  • The ability to acquire, understand, and use knowledge.
  • The ability to learn or understand or to deal with new or trying situations
  • The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
  • The act of understanding
  • The ability to learn, understand, and make judgments or have opinions that are based on reason
  • It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

There isn't even consensus on what intelligence actually means, yet here you are declaring "AI is not intelligence," whatever that even means.

Artificial intelligence is a term in computer science describing a system that's able to perform tasks that would normally require human intelligence. An Atari chess engine is an intelligent system. It's narrowly intelligent, as opposed to humans, who are generally intelligent, but it's intelligent nevertheless.

[–] Earthman_Jim@lemmy.zip 1 points 1 hour ago

It's something like folks calling a mirror intelligent.

[–] TherapyGary@lemmy.dbzer0.com 0 points 1 hour ago

As a therapist, I can tell you the only thing holding LLMs back from true intelligence is having to pee and poop. Peeing and pooping is the foundation of all higher level operations. I poured water on my PC and the LLM I was running said "I think" right before committing suicide

[–] unpossum@sh.itjust.works 20 points 7 hours ago (3 children)

As another technologist, I have to remind everyone that unless you subscribe to some rather fringe theories, humans are also based on standard physics.

Which is math. All the way down.

[–] NewOldGuard@lemmy.ml 5 points 1 hour ago

As a mathematician, it should be noted that the mathematics of physics aren’t laws of the universe, they are models of the laws of the universe. They’re useful for understanding and predicting, but are purely descriptive, not prescriptive. And as they say, all models are wrong, but some are useful

[–] HereIAm@lemmy.world 9 points 6 hours ago

I agree, the maths argument is not a good one. While a neural network is perhaps closer to what a brain is than just a CPU (or a clock, as it was compared to in the olden days), it would be a very big mistake to equate the two.

[–] xep@discuss.online 3 points 7 hours ago (2 children)

What maths do our memories follow? What about consciousness?

[–] Iconoclast@feddit.uk 1 points 36 minutes ago

Consciousness (the fact of experience) doesn't necessarily need to be linked to intelligence. It might be but it doesn't have to. An LLM is almost definitely more intelligent than an insect but it most likely is like nothing to be an LLM but it probably is like something to be an insect.

[–] xploit@lemmy.world 5 points 6 hours ago (1 children)

Obligatory xkcd: we're just meatbags somewhere to the left on "Purity" (https://xkcd.com/435/).

On a more serious note, there's plenty to explore there: some potentially interesting links between quantum physics and processes in our brains, as well as the way certain drugs can completely disrupt consciousness (ever had an operation under general anesthesia?) and how that might link up. But there is obviously no definitive answer.

At best consciousness is whatever flavour of philosophical interpretation/explanation you like at any given time.

[–] wonderingwanderer@sopuli.xyz 2 points 3 hours ago

Philosopher: looks at the mathematician...

[–] silverneedle@lemmy.ca 10 points 6 hours ago

As someone who knows a thing or two about biology I think LLMs strip away >90% of what makes animals think.

[–] msage@programming.dev 10 points 7 hours ago

Are you anthropomorphizing a word suggester into a being experiencing things?

[–] MagicShel@lemmy.zip 5 points 7 hours ago

The major thing AI lacks is continuous parallel "prompting" through a variety of channels including sensory, biofeedback, and introspection / meta-thought about internal state and thinking.

AI currently transforms a given input into an output. However, it cannot accept new input in the middle of producing an output. It can't evaluate the quality of its own reasoning except through trial and error.

If you had 1000 AIs operating in tandem and fed a continuous stream of prompts in the form of pictures, text, meta-inspection, and perhaps a simulation of biomechanical feedback with the right configuration, I think it might be possible to create a system that is a hell of an approximation of sentience. But it would be slow and I'm not sure the result would be any better than a human — you'd introduce a lot of friction to the "thought" process. And I have to assume the energy cost would be pretty enormous.

In the end it would be a cool experiment to be part of, but I doubt that version would be worth the investment.

[–] ExFed@programming.dev 4 points 8 hours ago (3 children)

It could also be that it lacks the machinery to feel any emotions at all. You don't (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don't (normally) have to train people to have empathy or compassion.

I argue that our obsession with AI is, itself, a misalignment with our environment; it disproportionately tickles psychological reward centers which evolved under unrecognizably different circumstances.

[–] Havoc8154@mander.xyz 1 points 3 hours ago (1 children)

I guess you don't have children.

You absolutely do have to train them to be afraid of bears, heights, and every fucking thing you can imagine. You absolutely do have to teach them empathy and compassion. There may be some nugget of instinct, but without reinforcement it might as well not exist.

[–] ExFed@programming.dev 1 points 3 hours ago

Hah, okay, you got me there. From my understanding, though, that's mostly because kids are still figuring out what's "normal", so their fear instinct isn't nearly as strong. I guess I should've stuck to the more instinctive sources of fear...

Regardless, that's not really my point. My point is an LLM doesn't rely on machinery in the same way that a human brain does. That doesn't make AI "worse" or "better" overall, but it does make it an awful replacement for other humans.

[–] tatterdemalion@programming.dev 0 points 3 hours ago* (last edited 49 minutes ago) (1 children)

LLMs might suck at this game but I'm pretty sure Deepmind's deep reinforcement learning AI could solve these easily.

EDIT: I know you guys hate AI around here, but you need to at least be aware of what the technology is capable of.

From 11 years ago:

https://youtu.be/V1eYniJ0Rnk

[–] 33550336@lemmy.world 3 points 3 hours ago (1 children)
[–] tatterdemalion@programming.dev 0 points 3 hours ago

Wdym? It's existed for at least a decade. Plenty of papers about it. It mastered Atari and Mario. It became the best Go player.

[–] Multiplexer@discuss.tchncs.de 11 points 10 hours ago

Link to the recent AI Explained video mainly covering ARC-AGI-3:
https://www.youtube.com/watch?v=s4tptozUJ8Y
