this post was submitted on 15 Apr 2026
30 points (100.0% liked)

technology


This paper discovered the continuous math equivalent of the digital NAND gate. It turns out that a single binary operation, paired with the constant 1, can generate every standard elementary function. That operation is defined as eml(x, y) = exp(x) - ln(y). You can reconstruct constants like pi and the imaginary unit, along with basic arithmetic and the standard tools of complex calculus, using nothing but this one function.
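To see the generating trick in action, here is one possible bootstrap from the definition (my own derivation, not necessarily the construction the paper uses): starting from the constant 1 alone you can produce e, then 0, and from 0 recover ln itself.

```python
import math

def eml(x, y):
    # The paper's operator: eml(x, y) = exp(x) - ln(y)
    return math.exp(x) - math.log(y)

# Constants bootstrapped from 1 alone (one possible derivation):
E = eml(1, 1)             # e^1 - ln(1) = e
ZERO = eml(1, eml(E, 1))  # e - ln(e^e) = e - e = 0

def my_exp(x):
    return eml(x, 1)      # e^x - ln(1) = e^x

def my_ln(y):
    # eml(0, y) = 1 - ln(y); feeding exp of that back in flips it:
    # eml(0, exp(1 - ln(y))) = 1 - (1 - ln(y)) = ln(y)   (real y > 0)
    return eml(ZERO, eml(eml(ZERO, y), 1))
```

This only covers the real, positive-argument cases; the full construction in the paper (negatives, pi, the imaginary unit) presumably needs the complex plane, where `cmath` would play the same role.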

The implications for machine learning and symbolic regression are massive. Normally, when artificial intelligence tries to discover mathematical formulas from data, it has to search through a chaotic space of different operators and syntax rules. Because the EML operator turns every mathematical expression into a uniform binary tree of identical nodes, the search space becomes perfectly regular. You can basically treat a mathematical formula like a neural network circuit. The paper shows that when you train these EML trees using standard gradient optimizers like Adam, the weights can actually snap to exact closed-form symbolic expressions instead of just giving fuzzy numerical approximations.
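A toy sketch of the "uniform binary tree" idea (my own encoding, not the paper's): every internal node is the same eml operation, and leaves are the input variable, the constant 1, or trainable parameters. Since exp and ln are smooth, the whole tree is differentiable end to end, which is what lets gradient optimizers work on it.

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

# Hypothetical tree encoding: internal nodes are ("eml", left, right);
# leaves are ("x",), ("one",), or ("theta", i) for a trainable parameter.
def evaluate(tree, x, theta):
    tag = tree[0]
    if tag == "eml":
        return eml(evaluate(tree[1], x, theta),
                   evaluate(tree[2], x, theta))
    if tag == "x":
        return x
    if tag == "one":
        return 1.0
    return theta[tree[1]]  # ("theta", i)

# exp(x) as a one-node EML tree: eml(x, 1) = e^x - ln(1)
EXP_TREE = ("eml", ("x",), ("one",))
```

Because every node is identical, a framework like PyTorch could backprop through a fixed tree shape and optimize only the leaf parameters, which is presumably where Adam comes in.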

This finding could change how we design analog circuits and specialized computing hardware. If you only need a single instruction to execute any complex mathematical function you could build physical hardware or single instruction stack machines optimized purely for the EML operation. The fact that this was discovered by computationally stripping down a calculator rather than through purely theoretical derivation highlights how much structural beauty is still hiding in basic math.
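The "single instruction stack machine" part is easy to prototype in software. A minimal sketch (my own, assuming an instruction set of literal pushes plus a single EML opcode):

```python
import math

def run(program):
    # program items: ("push", value) pushes a literal; "eml" pops y then x
    # and pushes exp(x) - ln(y). One arithmetic opcode for everything.
    stack = []
    for op in program:
        if op == "eml":
            y, x = stack.pop(), stack.pop()
            stack.append(math.exp(x) - math.log(y))
        else:
            stack.append(op[1])
    return stack[-1]

# eml(1, eml(eml(1, 1), 1)) = e - ln(e^e) = 0, with only pushes and EML
ZERO_PROGRAM = [("push", 1), ("push", 1), ("push", 1), "eml",
                ("push", 1), "eml", "eml"]
```

A physical version would of course need an analog or mixed-signal eml unit rather than `math.exp`, which is exactly the engineering question debated in the comments below.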

top 14 comments
[–] Are_Euclidding_Me@hexbear.net 0 points 5 hours ago* (last edited 4 hours ago)

Wow! This crank is really fixated on how many buttons a scientific calculator has! Poor baby thinks square roots are scary and yet feels as though they're in a position to write a worthwhile paper about math.

Sorry for being a grumpy asshole about this one, it's just extra annoying to read complete bullshit when it's in your chosen field. Likely LLM "assisted" bullshit too, judging by the multiple citations to papers from the 1700s that no one has read in hundreds of years, and that's assuming they're even real papers and not hallucinations the LLM made up that the "author" was too lazy to check.

A bit of a rant: You know what really annoys me about this? It would be a fun toy paper by an amateur interested in math without the LLM stink all over it.

The author of this paper got interested in a legitimately cool little problem: if you keep breaking calculator buttons, how many can you break before you stop being able to do all the calculations you want to do? That's a fun, neat little problem, ripe for a project for an advanced high schooler/early undergraduate!

But instead of playing around with this, having a good time, stretching their math muscles and learning some neat facts/techniques/tricks, they asked an LLM what the answer is, the LLM told them what a smart special person they are for thinking about this problem, and we got this shit-ass paper about this apparent "breakthrough".

Shit sucks, LLMs are a god damn nuisance.

[–] woodenghost@hexbear.net 6 points 14 hours ago* (last edited 14 hours ago)

"Discovering" the logarithmic identities and using AI to fluff them up to a paper is not science. Nothing new in this paper, just playing around with undergrad math.

"In fact, the EML Sheffer operator (3) is as simple as it appears, and in principle the article could end here"

I wish it had.

[–] Speaker@hexbear.net 5 points 15 hours ago* (last edited 14 hours ago)

Meh. arXiv has plenty of cool-sounding garbage papers. Until this has some peer review behind it (or, like, one non-AI blog post from anyone near the field), this is indistinguishable from a crank paper.

ETA: Chicken and egg

xy = exp(log(x) + log(y))
x + y = log(exp(x) exp(y))

Both nice things that are true, but awfully difficult to define without some external definition of the other one.
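Both identities do check out numerically, for what it's worth:

```python
import math

# xy and x + y, each expressed purely through exp and log
x, y = 2.5, 4.0
prod = math.exp(math.log(x) + math.log(y))  # = x * y, for x, y > 0
summ = math.log(math.exp(x) * math.exp(y))  # = x + y
```

But as the comment says, each identity leans on the other operation to define it, which is the chicken-and-egg problem.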

Edit 2: Now the actual snark; I suspect if anything comes of this, it will be approximately equivalent to the physician who invented the trapezoidal rule in 1994: https://pubmed.ncbi.nlm.nih.gov/8137688/

[–] FumpyAer@hexbear.net 4 points 15 hours ago

We're going to need paradigm shifts like this in chip making as we reach the physical limits of how small transistors can be.

[–] GiorgioBoymoder@hexbear.net 4 points 19 hours ago (1 children)

what the fuck? yeah this seems like a really big deal.

[–] yogthos@lemmygrad.ml 4 points 18 hours ago

yeah seems like there are some big implications for how we design hardware here

[–] quarrk@hexbear.net 4 points 20 hours ago

That’s quite cool

[–] Soot@hexbear.net 3 points 20 hours ago* (last edited 20 hours ago) (1 children)

I am perhaps underqualified to understand the importance of this. But I was taught 20 years ago that we can perform all logical operations from NAND gates, presumably we found it out some decades/centuries previous, and one would assume it consequently applies to everything in computing already.
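For concreteness, the NAND universality being referenced is a one-liner to demonstrate: every Boolean gate can be written as a composition of NANDs.

```python
def nand(a, b):
    # 0/1 integers in, 0/1 integers out
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))  # NOT(NAND) = AND

def or_(a, b):
    return nand(nand(a, a), nand(b, b))  # De Morgan: NOT(a) NAND NOT(b)
```

The paper's claim is the continuous analogue of this: one operation, plus a constant, generating everything else.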

But perhaps the 'continuous math equivalent' is fundamentally different by some means, they aren't words I understand.

[–] yogthos@lemmygrad.ml 4 points 18 hours ago (2 children)

The jump from NAND to this EML operator feels counterintuitive at first glance. You string enough NANDs together and you get an arithmetic logic unit that can easily add and multiply binary digits. The catch is that digital chips do not actually understand continuous math natively. When you ask a modern processor to calculate a sine wave or a natural logarithm, it cannot just do it in one physical step. It relies on floating point units running complex approximations like the CORDIC algorithm or querying huge precomputed lookup tables. These are built using millions of individual transistors and burn a lot of power and clock cycles just to approximate a single transcendental function.
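For reference, here is the kind of digital approximation being described: a minimal rotation-mode CORDIC sketch (the textbook version, not any specific FPU's implementation) that computes sine and cosine from nothing but adds, a small arctangent table, and multiplies by powers of two, which become bit shifts in fixed-point hardware.

```python
import math

def cordic_sincos(theta, iterations=32):
    # Rotation-mode CORDIC: rotate the vector (K, 0) through theta by a
    # sequence of fixed micro-rotations of angle atan(2**-i).
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # aggregate rotation gain
    x, y, z = k, 0.0, theta  # z is the residual angle still to rotate
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0  # steer the residual toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y  # ~ (cos(theta), sin(theta)) for |theta| <= pi/2
```

Dozens of iterations for one sine value is exactly the overhead an EML-native unit would be trying to sidestep.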

The EML operator offers a primitive for the continuous domain, even operating in the complex plane internally. Instead of building a tower of digital approximations, the paper suggests we could build a uniform circuit architecture where the basic physical building block natively computes eml(x,y)=exp(x)−ln(y). If you can engineer a reliable analog component or a specialized FPGA block that executes this single operation, you avoid all the hardware complexity inherent in current chip designs. Every mathematical expression simply becomes a binary tree of these identical nodes. This is fundamentally different because it replaces a bunch of highly specialized instruction sets with a completely homogeneous continuous structure.

This has massive implications for how we design chips. For example, when you are doing inference or handling quantized weights, standard floating point units are massive bottlenecks in terms of both die space and energy. An EML based architecture could theoretically allow a chip to evaluate complex elementary functions directly without the overhead of digital approximation. The variables simply flow through a physical circuit of identical elements.

[–] Soot@hexbear.net 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

Ah, thank you for explaining, I think I understand! This seems to be a theoretical demonstration that if we build hardware performing just this single function in an analog capacity, we can get every analog (or 'continuous') function out of it. Consequently we'd only need one reliable piece of analog hardware, which could massively boost efficiency and speed by replacing the digital approximation routines computers run now.

In which case, hecka fuckin' cool. Obviously as other commenter says, compounding errors could prove an issue, but it's not like that's an unmanageable issue.

[–] yogthos@lemmygrad.ml 1 points 2 hours ago

Yeah, this would actually need to be tried out to see what happens. But it's neat to see a completely new way to approach the problem.

[–] woodenghost@hexbear.net 2 points 14 hours ago (2 children)

"If you can engineer a reliable analog component or a specialized FPGA block that executes this single operation"

But there's absolutely no reason to think we could in a way that doesn't compound errors rapidly. And any attempt to do so would just be reinventing the wheel, or rather the calculator and the algorithms used to approximate those functions. Also, an FPGA wouldn't be continuous, it would still be digital.

[–] Soot@hexbear.net 2 points 5 hours ago

Signal/CPU/RAM errors in computers do already introduce error rates, of course. So we can handle a certain level of error in everyday computing.

The only challenge then in this regard is making an analog component that is indeed reliable enough to a useful level of precision. It's not something we typically do, so its feasibility is uncertain, but there's no reason to think it's impossible either.

[–] yogthos@lemmygrad.ml 2 points 11 hours ago

I mean we find out what's possible trying to do new things. The only way to know is to actually give it a shot, and maybe we'll find out that FPGAs don't work, maybe you need an analog circuit. We learn by doing.