submitted 09 Aug 2023* (last edited 1 year ago) by noneabove1182@sh.itjust.works to c/localllama@sh.itjust.works

Text from them:

Calling all model makers, or would-be model creators! Chai asked me to tell you all about their open source LLM leaderboard:

Chai is running a totally open LLM competition. Anyone is free to submit a llama-based LLM via our Python package 🐍. It gets deployed to users on our app. We collect the metrics and rank the models! If you place high enough on our leaderboard, you'll win money 🥇

We've paid out over $10,000 in prizes so far. 💰

Come to our discord and check it out!

https://discord.gg/chai-llm
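
For anyone curious what a submission actually looks like: the pip package is `chai-guanaco`, and going by their docs at the time, a minimal submission is roughly the sketch below. The repo name and generation parameters here are placeholders, and the exact API may have changed since.

```python
# pip install chai-guanaco
# Rough sketch of a submission, based on the chai-guanaco docs at the time.
# The repo name and generation params are placeholders; the exact API may differ.
import chai_guanaco as chai

submission_parameters = {
    # A llama-based model hosted on Hugging Face (placeholder repo name)
    "model_repo": "your-hf-username/your-llama-model",
    # Sampling settings used when the model is served to app users
    "generation_params": {
        "temperature": 1.0,
        "top_p": 1.0,
        "top_k": 40,
        "repetition_penalty": 1.0,
    },
}

submitter = chai.ModelSubmitter()
submission_id = submitter.submit(submission_parameters)
print(submission_id)  # keep this ID to track your model's metrics later
```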

Link to the latest board for people who don't feel like joining a random discord just to see the results:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

moreeni@lemm.ee 11 points 1 year ago* (last edited 1 year ago)

Me at first: wow, that's cool, I wonder how models are ranked

> Come to our discord and check it out!

OK, bye

noneabove1182@sh.itjust.works 3 points 1 year ago* (last edited 1 year ago)

lmao, a reasonable request. I'm pretty disappointed they don't have it hosted anywhere...

here's a link to their latest image of the leaderboard for what it's worth:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

korewa@reddthat.com 2 points 1 year ago

How is it ranked? I'm not familiar with any of those except Wizard

moreeni@lemm.ee 2 points 1 year ago

TYVM, OP :)

Wizard is at the top of every leaderboard I've seen so far, I should really check it out

noneabove1182@sh.itjust.works 2 points 1 year ago

There's apparently a pip command to display the leaderboard; if this ends up being of interest to people, I could make a post and just update it every so often with the latest leaderboard.
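
If it's the same `chai-guanaco` package as the submissions, it's presumably something along these lines (untested; the function name is from their docs and may have changed):

```python
# pip install chai-guanaco
import chai_guanaco as chai

# Fetches the current rankings and prints them as a table
# (untested sketch; the function name may have changed)
chai.display_leaderboard()
```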

At least (as far as I can tell) they appear to rank the models by human evaluation rather than by "benchmarks", which comes closer to measuring real-world performance.

It would be interesting to consider the types of questions that users are posing. For example, there is a difference between asking:

  • A surface-level fact-based question such as "what is ..."

  • A creative question like "write a story/article about ..." or "give me a list of possible talking points for a presentation on ..."

  • A question about reasoning/understanding like "why do you think the word ... is more popular than ... when referring to ..." or "explain why ... is considered socially acceptable while ... is not"

  • Anything coding-related

Also, some models seem to do well at things that can be answered in one or two replies, but struggle to follow an argument if you try to go more in-depth or continue a conversation about a topic.

noneabove1182@sh.itjust.works 1 points 1 year ago

Yeah, it's a step in the right direction at least, though now that you mention it, doesn't lmsys or someone do the same thing with human eval and side-by-side comparisons?

It's such a tricky line to walk between deterministic questions (repeatable but cheatable) and user questions (real world but potentially unfair)
