[–] zr0@lemmy.dbzer0.com 25 points 6 days ago (2 children)

It is actually very interesting to see what your code actually does on the CPU. Nowadays you have quite large instruction sets that help you with many operations that were manual in the past. But even then, at some point you start counting CPU cycles. Back then, a multiplication by 4 written in C probably resulted in 4 CPU cycles' worth of additions, instead of just shifting the bits 2 places to the left in one cycle. So you just saved 3 cycles. Sounds like nothing, but with only one core at 33 MHz and hundreds of such calculations, it adds up to a much larger impact.
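
A minimal C sketch of the equivalence being described (the function names are mine, not from the thread); the shift form is what the hand optimisation would look like:

```c
#include <stdio.h>

/* Multiply by 4 the "obvious" way: on a CPU without a fast hardware
   multiplier this could end up as repeated additions. */
static unsigned mul4_naive(unsigned x) {
    return x * 4u;
}

/* Same result via a shift: x << 2 multiplies by 2^2 = 4 and is
   typically a single-cycle operation on simple cores. */
static unsigned mul4_shift(unsigned x) {
    return x << 2;
}

int main(void) {
    unsigned x = 7;
    printf("%u %u\n", mul4_naive(x), mul4_shift(x)); /* prints: 28 28 */
    return 0;
}
```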

[–] luciferofastora@feddit.org 6 points 6 days ago

I studied "technical IT" for two years before changing to a less technical version that I ended up enjoying more (physics is fun to learn, but I don't wanna calculate that shit).

One of the most valuable things to come out of it was a class where we worked our way up from logic gates to the functions of an ALU, plus a rough look at CPUs and memory architecture. We probably would have gone deeper in a follow-up class I never ended up taking.
The point of the course was that one of the focus options featured microcontrollers and embedded systems, including low-level optimisation (the typical memory constraints might be getting more lax, but learning it isn't a bad idea).

I don't remember most of the details, I'm afraid, but it was an interesting insight into the things I take for granted when working in higher-level languages.

[–] jdr@lemmy.ml 2 points 6 days ago (1 children)

Citation needed! Processors have had multiplication in silicon since forever, and compiler writers aren't stupid. You can even check on https://godbolt.org/ with old versions. I bet you can't find a compiler from 1999 that won't optimize an unsigned integer multiply to a bit shift without turning off optimisations.
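
For anyone who wants to try it, a tiny test function along these lines (my own example, not from the thread) is enough to paste into https://godbolt.org/: build with optimizations (e.g. -O2) and see whether the multiply comes out as a shift (or an equivalent lea) rather than a mul:

```c
/* Paste into Compiler Explorer and inspect the generated assembly
   with optimizations enabled. */
unsigned times_four(unsigned x) {
    return x * 4u;
}
```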

[–] zr0@lemmy.dbzer0.com 2 points 6 days ago

C existed for almost a decade before this specific optimization was added to compilers in the late 1970s.

I wasn't referring to RCT in particular; I was just trying to explain what optimization means at that level, be it automatic or manual.