this post was submitted on 18 Oct 2025
26 points (93.3% liked)

PC Master Race

Hello,

I've been looking into new laptops, trying to get back into tech work after about ten years out due to a couple of spinal injuries, and I'm hoping for some advice.

TL;DR: laptop for a coworking space, flexible AMD hardware preference, NPU for LLM work, Linux sysadmin practice, metal case/chassis for heat dissipation, as budget-friendly as possible.

I'd need a laptop due to limited space and to stay mobile depending on the job; I'd mostly work remotely but may need to go to a local coworking space.

I'm hesitant about LLM usage, but I know the technology is here, at least for now, and I should be familiar with it.

I'm considering experimenting with mostly smaller-scale LLMs, and maybe whatever else I can 'feasibly' get working locally (there's a rough sketch of the scale I mean after this list):

  • Building as much of my own models as I'm able to
  • Creating/curating training data sets
  • Training models on those datasets as best I can
  • Experimenting with suitably sized models from my training or pre-trained
  • Testing 'agents'/methods/whatever else these types of quality improvements are being called
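
For a sense of the scale I mean, here's a minimal sketch (assuming PyTorch; purely illustrative, with a toy corpus) of the kind of tiny model I'd expect laptop hardware to be able to train:

```python
# A toy character-level bigram model: a few hundred parameters here,
# a few thousand with a bigger vocabulary. This is the scale that's
# realistic to train on laptop hardware.
import torch
import torch.nn as nn

text = "hello world, this is a tiny training corpus. " * 200
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

vocab = len(chars)
model = nn.Embedding(vocab, vocab)   # ~vocab^2 parameters
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    # predict the next character from the current one
    idx = torch.randint(0, len(data) - 1, (64,))
    logits = model(data[idx])        # (batch, vocab)
    loss = loss_fn(logits, data[idx + 1])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```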

So, I'm looking at machines with NPUs for that, though I hadn't heard of them until I started looking into machines just recently. Any advice, guidance, rants, lectures, or preaching related to that is welcome and appreciated. Truly.

I trend towards AMD hardware; it's just a preference of which corporation to send my money and support to. Higher-performing, lower-priced, or more "ethical/moral/customer-considerate" hardware may exist, but I'm unfamiliar with it, and these preferences were admittedly formed in younger years with minimal information. Please correct me as you see fit.

Additionally, I've had better results with Linux on AMD when I've needed to use it, whether for keeping older machines usable, Linux sysadmin practice, or Windows avoidance. Though LLM work might still be in its early stages on Linux at the moment.

I'm on the fence about, and generally lean against, HP, given how they've been with their products. Any statements or experiences with HP's computers, whether advising avoidance, saying they're not as problematic as their printers, or anything else you feel is relevant or helpful, would be greatly appreciated.

Yet the lowest-priced laptop I've found that also has some local LLM options seems to be this HP OmniBook 3 14" (AMD Ryzen AI 7 350, 16 GB).

HP Omnibook, HP Website

I also found this Lenovo IdeaPad 5 with about the same hardware, yet at a higher price even before tax.

Lenovo Ideapad, Costco Website

I've mostly found these CPUs paired with 16 GB of RAM. I'd prefer 32 GB for some extra headroom with LLM training/usage practice; outside of that alone, I'd likely do fine with 16.

I've considered metal bodies/chassis for heat dissipation if I'll be working the processors hard with LLM practice, likely adding a laptop stand and/or a cooling pad (if not built into the stand).

Sorry for the novel, but hopefully that helps describe what I'm hoping for.

I very much appreciate you taking the time to read my post. Thank you very much for any guidance or statements you may feel able to make.

Have a great weekend.

top 12 comments
[–] CIA_chatbot@lemmy.world 7 points 2 days ago

Don’t buy anything by HP, they literally have the worst hardware now.

[–] artifex@piefed.social 5 points 2 days ago (1 children)

“Local LLM” and “budget friendly” are mutually exclusive at the moment. Just running prompt processing and inference on a very small model can be slow. Training, even for a tiny classifier model, would be impossible. I would strongly recommend getting a workhorse like a slightly used Lenovo T series and just renting GPU time for a few bucks an hour from someone like vast.ai before deciding if you really want to get into it locally (which will cost $$$).

[–] vimmiewimmie@slrpnk.net 3 points 2 days ago (1 children)

That's fair. I suppose I took their marketing statements about NPUs at closer to face value than is warranted.

I was hoping for something entry-level to work with, recreate, and train smaller models on, to avoid additional investment in external services. But outside of simply running the apps the companies are pushing onto their devices, plus some community ones, local use maybe doesn't have much value without massive hardware.

Wdyt?

[–] artifex@piefed.social 3 points 2 days ago

I haven’t seen any consumer NPUs that will aid with training. They’re mainly used for accelerating image effects in Photoshop or blurring your background in Zoom. Most aren’t even any good for inference offload. Inference, and especially training, takes a good GPU with a large amount of VRAM (expensive) or something like a Ryzen Strix Halo with a ton of system RAM (also expensive). With model quantization you might run modestly sized models, but you would be training tiny, tiny models at best. Think thousands of parameters, not the billions or trillions used in the LLMs you know and love.
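
Rough numbers, if it helps (back-of-envelope only; assumes fp32 Adam training at ~16 bytes per parameter, before activations):

```python
# Back-of-envelope memory math for why training is the hard part.
GiB = 1024**3

def inference_gib(params, bits_per_weight):
    """Approximate weight memory for inference at a given quantization."""
    return params * bits_per_weight / 8 / GiB

def training_gib(params):
    """Rough training footprint with Adam in fp32:
    weights (4 B) + gradients (4 B) + two optimizer moments (8 B)
    = ~16 B/param, before activations."""
    return params * 16 / GiB

print(f"7B model, 4-bit inference : ~{inference_gib(7e9, 4):.1f} GiB")
print(f"7B model, Adam training   : ~{training_gib(7e9):.0f} GiB")
print(f"10M model, Adam training  : ~{training_gib(1e7) * 1024:.0f} MiB")
```

Quantization shrinks the inference side a lot, but training still wants full-precision copies, which is why it blows past laptop RAM so fast.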

[–] Agility0971@lemmy.world 2 points 2 days ago (1 children)

NPU? Is that a marketing term for underpowered GPUs or something? I would not use a laptop for any substantial GPU usage if I could help it. Use the laptop as a laptop, and a server with GPUs for the heavier/noisier workloads if possible. I run ollama on a desktop at home and have my devices connected to it through a Tailscale virtual network. The laptop stays silent with "long" battery life while the desktop takes the load remotely. I think this workflow can be used for more than just ollama, but it kind of depends. A VPS might be an option too.
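
As a minimal sketch of that setup ("desktop" is a hypothetical MagicDNS hostname; assumes ollama's standard HTTP API on port 11434 and that the server is set to listen on the network):

```python
# Query a remote ollama instance over a Tailscale network.
# "desktop" is a hypothetical Tailscale hostname; ollama serves its
# HTTP API on port 11434 (set OLLAMA_HOST=0.0.0.0 on the server so
# it listens beyond localhost).
import requests

resp = requests.post(
    "http://desktop:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```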

[–] vimmiewimmie@slrpnk.net 3 points 2 days ago (1 children)

NPUs are meant to offload specific tasks, ones CPUs handle less well than GPUs, while using less electricity than either. At least, that's what the manufacturers say.

Thanks for the tips!

[–] iturnedintoanewt@lemmy.world 5 points 2 days ago

You should check how good the software coverage for the NPU hardware is; as in, whether the specific app you want to use actually leverages NPU libraries at all. Many don't, and just focus on the easier GPU drivers and leave it at that.

Your laptop's NPU might have Linux-compatible drivers, and those might be harder or easier to install and run. But your app might not use them at all. I'm looking at you, GPT4All.
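
If you want to at least check the driver side on Linux, something like this is a start (a sketch only; assumes AMD's amdxdna driver, which registers a node under the kernel's accel subsystem; the exact path is my assumption, adjust as needed):

```python
# Check whether the kernel exposes an accelerator (NPU) device node.
# AMD's amdxdna driver registers under the DRM "accel" subsystem, so a
# node like /dev/accel/accel0 should exist if the driver is loaded;
# the exact path is an assumption here.
import glob

nodes = glob.glob("/dev/accel/accel*")
if nodes:
    for node in nodes:
        print(f"found accelerator node: {node}")
else:
    print("no /dev/accel nodes found; NPU driver likely not loaded")
```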

[–] ictinus@lemmy.world 2 points 2 days ago (1 children)

Is there anything in the Linux space making use of NPUs?

[–] vimmiewimmie@slrpnk.net 1 points 2 days ago* (last edited 1 day ago)

From what I've seen, AMD only supports compiling models on Linux, while Intel has OpenVINO or something similar that allows actually running models. But I might be wrong.

Otherwise, NPU use seems limited to mostly Windows currently.

Edit: I just found this. https://github.com/amd/gaia

Though, still limited. Just wanted to put it out there for people.
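
On the OpenVINO point above, there's a quick check (assumes a recent openvino pip package; on supported Intel machines with the NPU driver installed, the device list includes "NPU"):

```python
# List the compute devices OpenVINO can see; on supported Intel
# hardware with the NPU driver installed this includes "NPU"
# alongside "CPU" and "GPU".
from openvino import Core

core = Core()
print(core.available_devices)
```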

[–] msokiovt@lemmy.today 2 points 2 days ago (1 children)

I'd take a look at something like System76 and their stuff, or potentially something like Tuxedo. If you want Linux pre-installed on a laptop, look for hardware that's already supported OOTB.

Of course, there are also people refurbishing laptops and slapping Linux on them; plenty of ThinkPads and similar Lenovo devices get Linux that way. I'd stick with Mint, Pop!_OS, CachyOS, or Nobara, unless you want X11/XLibre by default. If you do want XLibre by default, there are many distros implementing it instead of Xorg's X11.

[–] vimmiewimmie@slrpnk.net 1 points 2 days ago (1 children)

Thank you.

Those are very expensive machines, but they'd certainly be capable of the work. I'm not sure I can afford it.

[–] msokiovt@lemmy.today 1 points 2 days ago

I'd probably just save up for one; once you've put aside enough for a sufficiently powerful machine, buy it with a little money left over. That's how I'd do it, though your mileage may vary.