Weight Comparison

Model                    Weight (grams)  Screen Size
LG Gram Pro 16 (2026)    1,199           16-inch
MacBook Air 15 (M4/M3)   1,510           15-inch
MacBook Pro 14 (M5/M3)   1,550-1,600     14-inch
MacBook Pro 16 (M3+)     2,140-2,200     16-inch
TheOakTree@lemmy.zip 1 points 1 week ago (last edited 1 week ago)

Oh, certainly. The reason I focused on speed is that an idiot using a shoddy LLM may not notice its hallucinations or failures as easily as they'd notice its sluggishness.
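
To put a rough number on "sluggishness": decode speed on consumer hardware is usually bound by memory bandwidth, since each generated token streams the full set of weights from memory once, so a back-of-the-envelope estimate is tokens/sec ≈ bandwidth ÷ model size. A minimal sketch, where the bandwidth and model-size figures are illustrative assumptions, not measurements:

    # Rough, memory-bound estimate of local LLM decode speed.
    # Assumes each generated token reads the full weights from memory once,
    # so tokens/s ~= memory bandwidth / model size.
    def est_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
        return bandwidth_gb_s / model_gb

    # Illustrative figures, not benchmarks: a 7B model at 4-bit
    # quantization is roughly 4 GB of weights.
    print(est_tokens_per_sec(4.0, 60.0))   # ~15 tok/s on ~60 GB/s laptop DDR5
    print(est_tokens_per_sec(4.0, 400.0))  # ~100 tok/s on a ~400 GB/s discrete GPU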

However, the meaningfulness of the LLM's responses is a necessary condition, whereas speed and convenience are closer to a sufficient condition (which contradicts my first statement). Either way, I don't think the average user knows what hardware they need to leverage local AI.
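
For a sense of what "the hardware they need" means in practice: the first-order question is whether the quantized weights (plus some headroom for the KV cache) fit in RAM or VRAM. A minimal sketch of that arithmetic, with all model sizes and quantization levels as illustrative assumptions:

    # Back-of-the-envelope memory footprint of local model weights.
    # params_b: parameter count in billions; bits: bits per weight after
    # quantization. 1e9 params * bits/8 bytes ~= params_b * bits/8 GB.
    def weights_gb(params_b: float, bits: int) -> float:
        return params_b * bits / 8

    for params_b, bits in [(7, 16), (7, 4), (13, 4), (70, 4)]:
        print(f"{params_b}B @ {bits}-bit: ~{weights_gb(params_b, bits):.1f} GB")
    # 7B @ 4-bit: ~3.5 GB -- workable on a 16 GB laptop with headroom for
    # the KV cache; 70B @ 4-bit: ~35 GB -- out of reach for the
    # thin-and-light machines this thread is about.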

My point is that this "AI" hardware gives a bad experience and leaves a bad impression of running AI locally, because 98% of people saw "AI" in the CPU model name and figured it should just work. And thus, more compute gets pushed to datacenters.