this post was submitted on 24 Mar 2026
you are viewing a single comment's thread
Because it scales logarithmically or exponentially just fine. Dynamic range here matters more than fine-grained precision.
You get a handful of values around zero, a handful of medium values, and a handful of increasingly large values.
Like in unsigned 4-bit integers (for AI) you'd likely have something binary/exponential like: 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384.
Instead of 0 to 15 (linear). This is also how posits work. John Gustafson designed posits with AI in mind, and he explains better than I could how these tiny 4/8-bit types can fill in for much bigger types with minimal cons and massive pros (reduced memory and reduced compute). For instance, 16,384 is what 14 typical bits gets you (2^14), but by scaling you can cover a similar dynamic range while sacrificing the fine-grained precision that AI doesn't really benefit from. Which is kind of similar to how floats work, with a sign bit, exponent, and mantissa. But most of the time people want binary floats for AI.
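To make the idea concrete, here's a minimal sketch (not a real posit implementation, just the exponential mapping described above) of decoding an unsigned 4-bit code into one of those exponentially spaced values:

```python
def decode_exp4(code: int) -> int:
    """Decode an unsigned 4-bit code exponentially:
    0 stays 0, any other code c maps to 2**(c - 1)."""
    if not 0 <= code <= 15:
        raise ValueError("code must fit in 4 bits")
    return 0 if code == 0 else 2 ** (code - 1)

# All 16 representable values: 0, 1, 2, 4, ..., 8192, 16384
values = [decode_exp4(c) for c in range(16)]
print(values)
```

Same 4 bits of storage as a linear 0-15 integer, but the largest value is 16384 instead of 15; the cost is that the gaps between neighboring values grow as the values do.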
So you might get something like:
0, 1/8, 1/4, 1/2, 1, 2, 4, 8 (also negatives)
Or even:
0, 1/64, 1/16, 1/4, 1, 4, 16, 64 (also negatives)
Or even:
0, 1/1000, 1/100, 1/10, 1, 10, 100, 1000 (also negatives)
Honestly, AI doesn't really care as long as you stick with the same scheme.
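A hedged sketch of that last point: pick one of these tables (here the "0, 1/8, 1/4, 1/2, 1, 2, 4, 8, also negatives" scheme, plus a hypothetical `quantize` helper) and snap every weight to its nearest entry:

```python
# Build a symmetric exponential table: 1/8 .. 8 as powers of two,
# their negatives, and zero -- 15 representable values total.
positives = [2.0 ** e for e in range(-3, 4)]
table = sorted([-v for v in positives] + [0.0] + positives)

def quantize(x: float) -> float:
    """Snap x to the nearest representable value in the table.
    Values beyond the table's range clamp to the largest magnitude."""
    return min(table, key=lambda v: abs(v - x))

print(quantize(0.3))    # snaps to 0.25
print(quantize(-5.0))   # snaps to -4.0
print(quantize(100.0))  # clamps to 8.0
```

As long as every tensor is encoded and decoded against the same table, the network sees a consistent scale, which is the "stick with the same scheme" part.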