I mean, mass is a massive factor for flight.
If you ever fly on a small plane, they ask your weight, and cap your luggage weight because they absolutely have to.
Actually, I prefer the tight packing. I wanna get from A to B, and have you seen how expensive tickets are?
They should really be using wide-body or blended-wing designs so the cabin is more spacious for the same max payload, but that’s a separate matter.
That being said, I think airlines should mix in a few spacier seats for big/tall people, at a small markup, without all the business class extras.
I’ll save you the searching!
For max speed when making parallel calls, vLLM: https://hub.docker.com/r/btbtyler09/vllm-rocm-gcn5
Generally, the built in llama.cpp server is the best for GGUF models! It has a great built in web UI as well.
For a more one-click RP focused UI, and API server, kobold.cpp rocm is sublime: https://github.com/YellowRoseCx/koboldcpp-rocm/
If you are running big MoE models that need some CPU offloading, check out ik_llama.cpp. It’s specifically optimized for MoE hybrid inference, but the caveat is that its vulkan backend isn’t well tested. They will fix issues if you find any, though: https://github.com/ikawrakow/ik_llama.cpp/
mlc-llm also has a Vulkan runtime, but it’s one of the more… exotic LLM backends out there. I’d try the other ones first.
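If it helps, here’s a minimal sketch of spinning up the llama.cpp server with a GGUF model (the model path and flag values are placeholders, tune them for your hardware):

```sh
# Start llama.cpp's built-in server with an OpenAI-compatible API + web UI.
# -m:    path to your GGUF model (placeholder)
# -ngl:  layers to offload to GPU (99 = "as many as fit")
# -c:    context window size
llama-server -m ./your-model.gguf -ngl 99 -c 4096 --port 8080
```

Then just open http://localhost:8080 in a browser for the web UI, or point any OpenAI-compatible client at that port.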
It’s great! Combat goes faster with players making moves in parallel (where possible).
Impulse offer: I’ve been pondering replaying BG3, or at least trying it out. If you want a random Lemmy stranger to help finish a save with, I’m down.
Yeah.
An anecdote: AT&T was having a fire sale on the base iPhone 16 Plus, like so cheap that it must have been a loss. It didn’t make any sense to me, but an employee speculated that, since it was their worst selling model of the lineup, they were clearing the inventory and writing it off as a loss to compensate for some other transactions.
AFAIK some outputs are made with a really tiny/quantized local LLM too.
And yeah, even that aside, GPT 3.5 is really bad these days. It’s obsolete.
Space Station 14
Thanks, I will have to check this out.
...You ever play Barotrauma?
They already have the enormous cost of production sunk though. I understand not paying for marketing, but projected profit goes from "negative" to "massively negative" if they don't at least license it out to streaming.
It's probably something tax related, but still.
I can’t bring myself to finish.
I have the same habit, but recently discovered it's apparently a neurodivergence symptom, heh.
Co-op with friends made me finish BG3 though.
"a lot of people around him did the same."
Your friend is in la la land.
I know a rich couple that would love to do this and 100% can't because it'd be ludicrously expensive, even with no kids. And that's in a place way cheaper than New York City.
...Maybe it was more practical when his parents were working, though?
Ah I missed this last time.