this post was submitted on 15 Apr 2026
30 points (100.0% liked)

technology


This paper presents the continuous-math equivalent of the digital NAND gate. It turns out that a single binary operation, paired with the constant 1, can generate every standard elementary function. That operation is defined as eml(x,y) = exp(x) - ln(y). Using nothing but this one function you can reconstruct constants like pi and the imaginary unit, along with basic arithmetic and the standard tools of calculus.
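To make this concrete, here's a short Python sketch checking a few identities that follow directly from the definition. These particular compositions are my own derivation for illustration, not necessarily the constructions used in the paper, and they stay on the real branch, so arguments to ln must be positive:

```python
import math

def eml(x, y):
    """The single operator: eml(x, y) = exp(x) - ln(y), for y > 0."""
    return math.exp(x) - math.log(y)

def exp_(x):
    # ln(1) = 0, so eml(x, 1) = exp(x)
    return eml(x, 1)

def ln_(y):
    # eml(0, y) = 1 - ln(y); composing three eml calls cancels the extra terms:
    # eml(0, eml(eml(0, y), 1)) = 1 - ln(exp(1 - ln y)) = ln(y)
    return eml(0, eml(eml(0, y), 1))

def sub(a, b):
    # exp(ln a) - ln(exp b) = a - b, valid for a > 0
    return eml(ln_(a), exp_(b))
```

From subtraction plus exp and ln, further operations like multiplication (as exp(ln a + ln b)) can in principle be bootstrapped, subject to domain restrictions on the real branch.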

The implications for machine learning and symbolic regression are massive. Normally, when an AI system tries to discover mathematical formulas from data, it has to search through a chaotic space of different operators and syntax rules. Because the EML operator turns every mathematical expression into a uniform binary tree of identical nodes, the search space becomes perfectly regular: you can essentially treat a mathematical formula like a neural network circuit. The paper shows that when you train these EML trees with standard gradient optimizers like Adam, the weights can actually snap to exact closed-form symbolic expressions instead of just giving fuzzy numerical approximations.
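As a toy illustration of that idea (not the paper's actual setup, which reportedly trains full EML trees with Adam), here is a minimal sketch that fits the single leaf constant of a one-node EML "tree" to data, using plain gradient descent with a finite-difference gradient:

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

# Toy model: f(x; c) = eml(x, c) with one learnable leaf constant c.
# Generate target data with the true value c = 2.
xs = [0.1 * i for i in range(1, 20)]
ys = [eml(x, 2.0) for x in xs]

def loss(c):
    # Mean squared error between the model and the data
    return sum((eml(x, c) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Plain gradient descent; the gradient is estimated by central differences.
c, lr, h = 1.0, 0.5, 1e-6
for _ in range(200):
    g = (loss(c + h) - loss(c - h)) / (2 * h)
    c -= lr * g
```

After training, c converges to the exact constant 2 rather than a fuzzy approximation, which is the "snapping" behavior described above in miniature.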

This finding could change how we design analog circuits and specialized computing hardware. If a single instruction suffices to execute any elementary mathematical function, you could build physical hardware or single-instruction stack machines optimized purely for the EML operation. The fact that this was discovered by computationally stripping down a calculator, rather than through purely theoretical derivation, highlights how much structural beauty is still hiding in basic math.
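As a sketch of what such a single-instruction stack machine might look like (entirely hypothetical; the instruction names and encoding here are my own invention for illustration), the whole ALU reduces to one operation plus a way to load constants:

```python
import math

def run(program):
    """Evaluate a program on a minimal EML stack machine.

    PUSH v  -- push the constant v onto the stack
    EML     -- pop y, then x, and push exp(x) - ln(y)
    """
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "EML":
            y = stack.pop()
            x = stack.pop()
            stack.append(math.exp(x) - math.log(y))
    return stack[-1]

# exp(2) = eml(2, 1), since ln(1) = 0
exp2 = run([("PUSH", 2.0), ("PUSH", 1.0), ("EML",)])

# 1 - ln(5) = eml(0, 5)
one_minus_ln5 = run([("PUSH", 0.0), ("PUSH", 5.0), ("EML",)])
```

In a real design the interesting question would be how cheaply the single EML unit can be implemented in silicon or analog components compared to separate exp and ln blocks.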

[–] woodenghost@hexbear.net 2 points 19 hours ago (2 children)

If you can engineer a reliable analog component or a specialized FPGA block that executes this single operation

But there's absolutely no reason to think we could in a way that doesn't compound errors rapidly. Any attempt to do so would just be reinventing the wheel, or rather the calculator and the algorithms used to approximate those functions. Also, an FPGA wouldn't be continuous; it would still be digital.

[–] Soot@hexbear.net 2 points 10 hours ago

Signal/CPU/RAM errors in computers already introduce error rates, of course, so we can handle a certain level of error in everyday computing.

The only challenge in this regard, then, is making an analog component that is reliably accurate to a useful level of precision. It's not something we typically do, so its feasibility is uncertain, but there's no reason to think it's impossible either.

[–] yogthos@lemmygrad.ml 2 points 16 hours ago

I mean, we find out what's possible by trying to do new things. The only way to know is to actually give it a shot; maybe we'll find out that FPGAs don't work and you need an analog circuit. We learn by doing.