this post was submitted on 09 Aug 2023
4 points (75.0% liked)

LocalLLaMA

2249 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago
Text from them:

Calling all model makers, or would-be model creators! Chai asked me to tell you all about their open source LLM leaderboard:

Chai is running a totally open LLM competition. Anyone is free to submit a llama-based LLM via our Python package. It gets deployed to users on our app. We collect the metrics and rank the models! If you place high enough on our leaderboard you'll win money.

We've paid out over $10,000 in prizes so far.

Come to our discord and check it out!

https://discord.gg/chai-llm

Link to latest board for the people who don't feel like joining a random discord just to see results:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

[โ€“] micheal65536@lemmy.micheal65536.duckdns.org 4 points 1 year ago (1 children)

At least (as far as I can tell) they appear to rank the models by human evaluation rather than "benchmarks", which comes closer to measuring real-world performance.

It would be interesting to consider the types of questions that users are posing. For example, there is a difference between asking:

  • A surface-level fact-based question such as "what is ..."

  • A creative question like "write a story/article about ..." or "give me a list of possible talking points for a presentation on ..."

  • A question about reasoning/understanding like "why do you think the word ... is more popular than ... when referring to ..." or "explain why ... is considered socially acceptable while ... is not"

  • Anything coding-related

Also, some models seem to do well at things that can be answered after one or two replies, but struggle to follow an argument if you try to go more in-depth or continue a conversation about a topic.

Yeah, it's a step in the right direction at least, though now that you mention it, doesn't lmsys or someone do the same thing with human eval and side-by-side comparisons?

It's such a tricky line to walk between deterministic questions (repeatable but cheatable) and user questions (real-world but potentially unfair).
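For anyone curious how side-by-side human votes get turned into a leaderboard, the usual approach (the one lmsys' Chatbot Arena popularized) is Elo-style rating updates over pairwise outcomes. A minimal sketch below; the model names, vote data, and starting rating of 1000 are made-up placeholders, not anything from Chai's or lmsys' actual pipeline.

```python
def elo_update(r_a, r_b, outcome, k=32):
    """Update ratings for models A and B after one side-by-side vote.

    outcome: 1.0 if A won, 0.0 if B won, 0.5 for a tie.
    k controls how much a single vote moves the ratings.
    """
    # Expected score for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    # Zero-sum update: A gains exactly what B loses.
    r_a += k * (outcome - expected_a)
    r_b += k * ((1.0 - outcome) - (1.0 - expected_a))
    return r_a, r_b


# Hypothetical vote log: (model shown as A, model shown as B, outcome).
ratings = {"model-x": 1000.0, "model-y": 1000.0}
votes = [
    ("model-x", "model-y", 1.0),  # user preferred model-x
    ("model-x", "model-y", 0.5),  # tie
]
for a, b, outcome in votes:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)
```

One nice property for a leaderboard: because each update is zero-sum, the total rating mass stays constant, so rankings reflect only relative preference, not how often a model was sampled.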