brucethemoose

joined 2 years ago
[–] brucethemoose@lemmy.world 1 points 2 minutes ago

Ah I missed this last time.

[–] brucethemoose@lemmy.world 1 points 4 minutes ago* (last edited 3 minutes ago)

I mean, mass is a massive factor for flight.

If you ever fly on a small plane, they ask your weight, and cap your luggage weight because they absolutely have to.

[–] brucethemoose@lemmy.world 1 points 10 minutes ago* (last edited 5 minutes ago) (2 children)

Actually, I prefer the tight packing. I wanna get from A to B, and have you seen how expensive tickets are?

They should really be using flying-wing/blended-wing bodies so the cabin is more spacious for the same max payload, but that’s a separate matter.


That being said, I think airlines should mix in a few roomier seats for big/tall people at a small markup, without all the business class extras.

[–] brucethemoose@lemmy.world 1 points 17 minutes ago* (last edited 16 minutes ago)

I’ll save you the searching!

For max speed when making parallel calls, vllm: https://hub.docker.com/r/btbtyler09/vllm-rocm-gcn5

Generally, the built-in llama.cpp server is the best for GGUF models! It has a great built-in web UI as well.

For a more one-click RP focused UI, and API server, kobold.cpp rocm is sublime: https://github.com/YellowRoseCx/koboldcpp-rocm/

If you are running big MoE models that need some CPU offloading, check out ik_llama.cpp. It’s specifically optimized for MoE hybrid inference, but the caveat is that its vulkan backend isn’t well tested. They will fix issues if you find any, though: https://github.com/ikawrakow/ik_llama.cpp/

mlc-llm also has a Vulkan runtime, but it’s one of the more… exotic LLM backends out there. I’d try the other ones first.
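
For reference, here’s a minimal sketch of hitting the llama.cpp server once it’s running, via its OpenAI-compatible endpoint. The port (8080 is the default) and the model name are placeholders, so adjust them to however you launched llama-server:

```python
# Minimal sketch: query a local llama.cpp server (llama-server) through its
# OpenAI-compatible API. Port and model name are assumptions; change them to
# match your own launch command.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # local server, key unused

resp = client.chat.completions.create(
    model="local-gguf",  # placeholder; llama-server serves whatever model it loaded
    messages=[{"role": "user", "content": "One sentence about GGUF quantization, please."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```

The same snippet works against kobold.cpp or TabbyAPI too, since they all expose OpenAI-compatible endpoints; only the base URL changes.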

[–] brucethemoose@lemmy.world 1 points 23 minutes ago

It’s great! Combat goes faster with players making moves in parallel (where possible).

Impulse offer: I’ve been pondering replaying BG3, or at least trying it out. If you want a random Lemmy stranger to finish a save with, I’m down.

[–] brucethemoose@lemmy.world 1 points 34 minutes ago

Yeah.

An anecdote: AT&T was having a fire sale on the base iPhone 16 Plus, like so cheap that it must have been a loss. It didn’t make any sense to me, but an employee speculated that, since it was their worst selling model of the lineup, they were clearing the inventory and writing it off as a loss to compensate for some other transactions.

[–] brucethemoose@lemmy.world 1 points 2 hours ago

AFAIK some outputs are made with a really tiny/quantized local LLM too.

And yeah, even that aside, GPT 3.5 is really bad these days. It’s obsolete.

[–] brucethemoose@lemmy.world 1 points 3 hours ago (1 children)

Space Station 14

Thanks, I will have to check this out.

...You ever play Barotrauma?

[–] brucethemoose@lemmy.world 3 points 3 hours ago* (last edited 3 hours ago) (2 children)

They already have the enormous cost of production sunk though. I understand not paying for marketing, but projected profit goes from "negative" to "massively negative" if they don't at least license it out to streaming.

It's probably something tax related, but still.

[–] brucethemoose@lemmy.world 2 points 3 hours ago (2 children)

I can’t bring myself to finish.

I have the same habit, but recently discovered it's apparently a neurodivergence symptom, heh.

Co-op with friends made me finish BG3 though.

[–] brucethemoose@lemmy.world 10 points 3 hours ago* (last edited 3 hours ago)

"a lot of people around him did the same."

Your friend is in la la land.

I know a rich couple that would love to do this and 100% can't because it'd be ludicrously expensive, even with no kids. And that's in a place way cheaper than New York City.

...Maybe it was more practical when his parents were working, though?

[–] brucethemoose@lemmy.world 1 points 5 hours ago

Bugonia was great!

 

Driving the news: Texas A&M's Andrew Dessler and Rutgers' Robert Kopp organized the response.

  • It gets into the "greening" and agricultural benefits of higher CO2 levels; disputes whether climate change is making hurricanes more intense; and disagrees with many scientists on the potential lower bound of expected warming from doubling CO2 concentrations, among many divides.
  • "When I read the DOE report, I saw a document that does not respect science," he tells Axios via email. "Instead, I saw a document that's a mockery of science."

Axios is short and light on ads, so the whole thing's worth a read.

 

Maybe this instrumental cover is closer:

https://www.youtube.com/watch?v=UZj2ufaIne4

But the guitar in the original sounds so "Rimworld" even if the lyrics/vocals aren't as topical.

 

"We're seeing a unifying moment. The band is back together," MAGA podcaster Jack Posobiec told Axios.

"He gets attacked just relentlessly by the Wall Street Journal in such an uncalled for way, and we have his back 100% against this smearing and this slandering," Charlie Kirk added on his show.

 

Similar to: https://lemmy.world/post/32961209

But I find the extra quotes interesting:

Two sources told Axios the plan would include long-range missiles that could strike deep inside Russia.

Trump said Monday that whenever he speaks to Putin, "I always hang up and say, 'Well, that was a nice phone call.' And then missiles are launched into Kyiv or some other city. And after that happens three or four times, you say, 'Talk doesn't mean anything.'"

A bill circulating in the Senate would impose 500% tariffs on countries that buy Russian oil, but Trump suggested that number was too high and that he could impose 100% tariffs without Senate approval.

 

As to why it (IMO) qualifies:

"My children are 22, 25, and 27. I will literally fight ANYONE for their future," Greene wrote. "And their future and their entire generation's future MUST be free of America LAST foreign wars that provoke terrorists attacks on our homeland, military drafts, and NUCLEAR WAR."

Hence, she feels her support is threatening her kids.

"MTG getting her face eaten" was not on my 2025 bingo card, though she is in the early stage of face eating.

 

"It's not politically correct to use the term, 'Regime Change' but if the current Iranian Regime is unable to MAKE IRAN GREAT AGAIN, why wouldn't there be a Regime change??? MIGA!!

 

Video is linked. SFW, but keep your volume down.

 

In a nutshell, he’s allegedly frustrated by too few policies favorable to him.

 
  • The IDF is planning to displace close to 2 million Palestinians to the Rafah area, where compounds for the delivery of humanitarian aid are being built.
  • The compounds are to be managed by a new international foundation and private U.S. companies, though it's unclear how the plan will function after the UN and all aid organizations announced they won't take part.
 

Qwen3 was apparently posted early, then quickly pulled from HuggingFace and Modelscope. The large ones are MoEs, per screenshots from Reddit:

screenshots

Including a 235B/22B active and a 30B/3B active.

Context appears to 'only' be 32K unfortunately: https://huggingface.co/qingy2024/Qwen3-0.6B/blob/main/config_4b.json

But it's possible they're still training them to 256K:

from reddit

Take it all with a grain of salt; configs could change with the official release, but it appears it's happening today.
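
If you want to check the context claim yourself, here's a quick sketch. The repo and filename come from the link above, and max_position_embeddings is the usual HF config field for context length, assuming the leaked config follows convention:

```python
# Quick sketch: pull the leaked config and read its context window.
# Repo/filename are from the link above; the field name is the standard
# Hugging Face config key, assuming the leak follows it.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("qingy2024/Qwen3-0.6B", "config_4b.json")
with open(path) as f:
    cfg = json.load(f)
print(cfg.get("max_position_embeddings"))  # ~32768 per the leak, subject to change
```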

 

This is one of the "smartest" models you can fit on a 24GB GPU now, with no offloading and very little quantization loss. It feels big and insightful, like a better (albeit dry) Llama 3.3 70B with thinking, and with more STEM world knowledge than QwQ 32B, but it comfortably fits thanks to the new exl3 quantization!

Quantization Loss

You need to use a backend that supports exl3, like (at the moment) text-generation-webui or (soon) TabbyAPI.
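
As a rough sanity check on the 24GB claim (my own back-of-envelope math, not from the exl3 docs; the parameter count and bits-per-weight are assumptions for a ~32B-class dense model):

```python
# Back-of-envelope VRAM estimate (rough math, not from any exl3 documentation).
# Assumes a ~32B-parameter dense model quantized to ~4 bits per weight.
params = 32e9          # parameter count (assumption)
bits_per_weight = 4.0  # typical exl3 quant level (assumption)

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"weights: ~{weights_gb:.0f} GB")                      # ~16 GB

# That leaves roughly 8 GB of a 24 GB card for KV cache, activations,
# and overhead, which is why it fits with no offloading.
print(f"headroom on a 24 GB card: ~{24 - weights_gb:.0f} GB")
```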

 

"It makes me think that maybe he [Putin] doesn't want to stop the war, he's just tapping me along, and has to be dealt with differently, through 'Banking' or 'Secondary Sanctions?' Too many people are dying!!!", Trump wrote.
