this post was submitted on 27 Mar 2026
48 points (87.5% liked)

LocalLLaMA

4686 readers
23 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members. I.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to maintaining a blockchain/mining for crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
MODERATORS

[deleted by user]

top 19 comments
[–] Zwuzelmaus@feddit.org 9 points 1 month ago (2 children)

Well done, meat popsicle :)

[–] SuspciousCarrot78@lemmy.world 4 points 1 month ago* (last edited 26 minutes ago)

[deleted by user]

[–] SuspciousCarrot78@lemmy.world 3 points 1 month ago* (last edited 1 hour ago)

[deleted by user]

[–] muzzle@lemmy.zip 4 points 1 month ago* (last edited 1 month ago) (1 children)

How can you not reference this gem of an SCP entry?

PS This sounds super interesting, looking forward to try it.

PPS I am waiting for the day when I can run this on my phone.

[–] SuspciousCarrot78@lemmy.world 5 points 1 month ago* (last edited 26 minutes ago) (1 children)
[–] muzzle@lemmy.zip 2 points 1 month ago (1 children)

I am waiting for the day when you can run this kind of model directly on the phone.

[–] SuspciousCarrot78@lemmy.world 4 points 1 month ago* (last edited 1 hour ago) (1 children)
[–] muzzle@lemmy.zip 1 points 1 month ago (1 children)

Unfortunately I need to use Android.

[–] neblem@lemmy.world 3 points 4 weeks ago (1 children)

Termux + CPU inference + llama.cpp can get ~4B models running, even if slowly, on 5-year-old flagship phones.
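For reference, that setup looks roughly like this (build steps per llama.cpp's own docs; the model filename is a placeholder you'd swap for whatever GGUF you download):

```shell
# Inside Termux on Android: build llama.cpp for CPU-only inference.
pkg update && pkg install -y git cmake clang
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run a small quantized model (placeholder filename - bring your own GGUF).
./build/bin/llama-cli -m qwen2.5-3b-instruct-q4_k_m.gguf -p "Hello" -n 64
```

A Q4-quantized ~3-4B model fits in the RAM of most recent flagships; expect a few tokens per second on CPU.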

[–] muzzle@lemmy.zip 2 points 4 weeks ago (1 children)

I'm experimenting with "tool neuron" and "off grid".

[–] neblem@lemmy.world 1 points 4 weeks ago

Neat, cool to see these all-in-one native Android tools get so far.

[–] SuspciousCarrot78@lemmy.world 2 points 1 month ago* (last edited 28 minutes ago)

[deleted by user]

[–] pound_heap@lemmy.dbzer0.com 1 points 1 month ago (1 children)

This looks awesome! Can you share the real life use cases for this? What are you using it for?

[–] SuspciousCarrot78@lemmy.world 3 points 1 month ago* (last edited 28 minutes ago) (1 children)
[–] pound_heap@lemmy.dbzer0.com 3 points 1 month ago (1 children)

Nice! You kinda answered my next question already with this web tool. I was curious whether you're getting any useful results from the model itself without feeding it good data first or relying on hardcoded tools. A 4B model must be really dumb for anything even a little complicated. I see you recommend running two models - do they run in parallel, or can the router control the backend and switch models?
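The two-model routing being asked about can be sketched as a tiny dispatcher. Everything here is an illustrative assumption (model names, keywords, thresholds), not the poster's actual setup: simple prompts go to the small fast model, and anything needing tools or long context escalates to the bigger one.

```python
# Sketch of a prompt router choosing between two local models.
# Model names and the routing heuristic are hypothetical examples.

TOOL_KEYWORDS = ("search", "calculate", "fetch", "browse")

def route(prompt: str) -> str:
    """Return which backend model should handle this prompt."""
    needs_tools = any(k in prompt.lower() for k in TOOL_KEYWORDS)
    long_context = len(prompt.split()) > 200
    if needs_tools or long_context:
        return "qwen3-14b"   # heavier model for tool use / long inputs
    return "qwen3-4b"        # fast default for chat and simple Q&A
```

On VRAM-limited hardware the router would typically swap models sequentially (unload one, load the other) rather than keep both resident in parallel.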

[–] SuspciousCarrot78@lemmy.world 1 points 1 month ago* (last edited 28 minutes ago)

[deleted by user]

[–] ZombiFrancis@sh.itjust.works 1 points 1 month ago (1 children)

"List the article's concrete claims about permit status and turbine operations, each with support."

  • EPA position: these turbines require permits under the Clean Air Act.

Not quite, though. The article cited the EPA's policy via a former EPA enforcement staffer who was explicitly stating that the EPA is not requiring that here and has made rules deferring to state and local authorities. The guy was saying the EPA should be acting, but isn't. The article was clever with it, but that's all the more reason.

[–] SuspciousCarrot78@lemmy.world 1 points 1 month ago* (last edited 27 minutes ago) (1 children)
[–] ZombiFrancis@sh.itjust.works 0 points 1 month ago

Kind of. It isn't wrong, but it's a crucial omission that the article is interviewing a former EPA enforcement guy (i.e. not current) about current enforcement policy (which is radically changing under Zeldin). So the model's interpretation of whether the state will hold to federal pressure becomes imprecise, since it's really this guy stating that there's actually a lack of federal pressure.

But it does rightfully note information is not in the article to answer, which is neat.

Because... for context not directly in the article, technically if the EPA defers to the state, then Mississippi saying a temporary permit exemption actually applies here satisfies the permit requirement, which Buckheit has to know. (Which directly explains the lack of federal pressure.) Citing the policy in January was a clever non-answer from the EPA. They're actually saying state and federal policies are NOT in conflict.

Also, I'm not trying to dismiss any of this, more trying to provide an insight that might help with accuracy. I have a bit of knowledge on this specific subject, so I thought I'd note a point where I can measure an inaccuracy.

These kinds of articles can be really sneaky about claims and statements. Mostly minor and innocuous, but an LLM doesn't know the difference. Like, this caught that Buckheit is talking about what should be happening under previous admins when he was involved, but that's specifically not what the EPA is doing anymore, which the LLM appears to have missed in part. To me, that part was the primary purpose of the article.