this post was submitted on 27 Mar 2026
240 points (96.9% liked)


The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that are difficult for AIs such as LLMs, reasoning models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
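
To make that interaction model concrete, here is a minimal sketch of the explore-act-learn loop such an agent has to implement. The environment here is purely hypothetical, a toy stand-in for illustration; it is not the actual ARC-AGI-3 agent API.

```python
import random

class ToyEnvironment:
    """Hypothetical stand-in for one ARC-AGI-3 environment.

    The agent is given no rules and no stated goal. Here the hidden
    win condition is reaching position `goal` on a 1-D track, once
    per level, across `levels` levels."""

    def __init__(self, levels=3, goal=5):
        self.levels, self.goal = levels, goal

    def reset(self):
        self.level, self.pos = 0, 0
        return self.pos                            # the observation

    def step(self, action):                        # actions: -1 or +1
        self.pos = max(0, self.pos + action)
        if self.pos == self.goal:                  # hidden win condition
            self.level += 1                        # advance to next level
            self.pos = 0
        return self.pos, self.level >= self.levels

def explore(env, actions=(-1, +1), max_turns=500):
    """Naive exploratory baseline: act, observe, remember what's novel."""
    obs = env.reset()
    seen = {obs}
    for _ in range(max_turns):
        obs, done = env.step(random.choice(actions))
        seen.add(obs)   # a smarter agent would steer toward unseen states
        if done:
            return True   # solved every level on first exposure
    return False

print(explore(ToyEnvironment()))   # usually True for this toy environment
```

Even this toy makes the difficulty visible: random action choice stumbles onto the toy's goal, but the real environments require the agent to notice *which* observations changed, infer why, and reuse that knowledge on later levels.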

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what's next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

[Chart: ARC-AGI-3 Leaderboard. Logarithmic cost on the horizontal axis; the vertical scale runs from 0% to 3%. If human scores were included, they would be at 100%, at the cost of approximately $250.]

https://arcprize.org/leaderboard
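
For a sense of scale, here is a rough cost-per-percentage-point comparison derived from the numbers above. This is my own back-of-the-envelope framing, not an official ARC Prize metric, and it assumes each listed dollar figure is the total cost of the run:

```python
# (success rate %, total cost in USD), per the leaderboard above
runs = {
    "OpenAI GPT-5.4 High":    (0.3, 5_200),
    "Google Gemini 3.1 Pro":  (0.2, 2_200),
    "Anthropic Opus 4.6 Max": (0.2, 8_900),
    "Human baseline":         (100.0, 250),
}
# Grok 4.20 (0.0% at $3.8K) is omitted: cost per point is undefined at zero.
for name, (pct, usd) in runs.items():
    print(f"{name}: ${usd / pct:,.2f} per percentage point")
# OpenAI GPT-5.4 High: $17,333.33 per percentage point
# Google Gemini 3.1 Pro: $11,000.00 per percentage point
# Anthropic Opus 4.6 Max: $44,500.00 per percentage point
# Human baseline: $2.50 per percentage point
```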

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum “easy for humans” threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
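
That inclusion rule is simple enough to state as code. A minimal sketch, with hypothetical record fields (the report describes the rule, not this implementation):

```python
def passes_human_threshold(attempts, min_full_solves=2):
    """Keep an environment only if at least two of its ten
    first-exposure testers independently solved every level."""
    return sum(1 for a in attempts if a["solved_all_levels"]) >= min_full_solves

# Example: 10 testers, 3 of whom independently completed all levels.
attempts = [{"solved_all_levels": i < 3} for i in range(10)]
assert passes_human_threshold(attempts)   # eligible for inclusion
```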

[–] UnrepentantAlgebra@lemmy.world 10 points 2 days ago (6 children)

If human scores were included, they would be at 100%, at the cost of approximately $250

Wait, why did it cost real humans $250 to pass the test?

[–] KairuByte@lemmy.dbzer0.com 8 points 2 days ago

I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.

[–] FrankFrankson@lemmy.world 6 points 2 days ago (1 children)

That's how much individual testing humans cost when you buy them in bulk.

[–] Aceticon@lemmy.dbzer0.com 1 points 2 days ago

If there had been a "Buy 10, Get 1 free" they could've used 11 humans instead of 10 for the same $250.

[–] aesopjah@sh.itjust.works 4 points 2 days ago (1 children)

It's also an odd metric, since only 20-60% of the humans completed it. Very "60% of the time, it works every time" energy.

Ideally they'd run the bots through multiple times (with no context or memory of previous runs), but I guess that's cost-prohibitive?

[–] monotremata@lemmy.ca 3 points 2 days ago

Yeah, this is what I was going to call out. Calling it "100% solvable by humans" and saying "if human scores were included, they would be at 100%" when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don't think this kind of hyperbole is necessary; I assume there are some humans who scored 100%, but I would find it a lot more useful if they said something like "the worst-performing human in our sample was able to solve 45% of the tasks" or whatever. Given that the AIs are still scoring below 1%, that contrast is still pretty stark.

[–] mapleseedfall@lemmy.world 4 points 2 days ago

You'd have to eat $250 worth of burgers to pass it.

[–] brianpeiris@lemmy.ca 3 points 2 days ago* (last edited 2 days ago)

This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of about $128, plus $5 per solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would come to an average of about $250 per person. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case, it's one to two orders of magnitude less than the LLMs.

Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved

https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
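
As a sanity check, that estimate works out arithmetically. The fee range is from the report excerpt above; the 25-task public set size is as stated in the comment:

```python
avg_fee = (115 + 140) / 2         # $127.50, midpoint of the report's fixed fee range
bonus_per_solve = 5               # per environment solved
public_tasks = 25                 # public test set size, per the comment above

per_person = avg_fee + bonus_per_solve * public_tasks
print(per_person)                 # 252.5 -> "approximately $250" per person
print(per_person / public_tasks)  # 10.1  -> roughly $10 per human per task
```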

[–] ExLisper@lemmy.curiana.net 3 points 2 days ago

Because I ain't doing this shit for free.