Lemdro.id

2,403 readers
12 users here now

Our Mission 🚀

Lemdro.id strives to be a fully open source instance with incredible transparency. Visit our GitHub for the nuts and bolts that make this instance soar and our Matrix Space to chat with our team and access the read-only backroom admin chat.

Community Guidelines

We believe in maintaining a respectful and inclusive environment for all members. We encourage open discussion, but we do not tolerate spam, harassment, or disrespectful behaviour. Let's keep it civil!

Get Involved

Are you an experienced moderator, interested in bringing your subreddit to the Fediverse, or a Lemmy app developer looking for a home community? We'd be happy to host you! Get in touch!

Quick Links

Lemdro.id Interfaces 🪟

Our Communities 🌐

Lemmy App List 📱

Chat and More 💬

Instance Updates

!lemdroid@lemdro.id

founded 2 years ago
ADMINS
submitted 5 months ago* (last edited 5 months ago) by Xylight to c/localllama@sh.itjust.works

Benchmarks look pretty good, even better than some of the text-only models, but make sure to take them with a grain of salt.

Benchmarks

[Image: benchmark results for Qwen3-VL-30B-A3B (no thinking)]

[Image: visual benchmarks for Qwen3-VL-235B-A22B (thinking)]

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/comfyui_user_999 on 2025-11-01 04:05:18+00:00.


Day-old news for anyone who watches r/localllama, but llama.cpp merged in support for Qwen's new vision model, Qwen3-VL. It seems remarkably good at image interpretation, maybe a new best-in-class for 30ish billion parameter VL models (I was running a quant of the 32b version).
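For anyone who wants to try it, a minimal sketch using llama.cpp's multimodal CLI (`llama-mtmd-cli`). The GGUF filenames and the prompt here are placeholders, and exact flag support can vary by llama.cpp version, so check `--help` on your build:

```shell
# Hypothetical filenames: download a Qwen3-VL GGUF plus its matching
# mmproj (vision projector) file from your quant provider of choice.
./llama-mtmd-cli \
  -m Qwen3-VL-32B-Instruct-Q4_K_M.gguf \
  --mmproj mmproj-Qwen3-VL-32B-F16.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

The `--mmproj` file carries the vision encoder/projector weights, which llama.cpp loads alongside the language model; without it the model runs text-only.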
