this post was submitted on 31 Jul 2025

Asklemmy

Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans) then can it really be considered "automation"?

top 31 comments
[โ€“] CanadaPlus@lemmy.sdf.org 20 points 4 days ago* (last edited 11 hours ago) (1 children)

and cannot reliably replace work done by humans

See, that's the crux of it for me. Something with stochastic elements could totally count as automation, but it has to actually replace manual work.

LLMs could be made deterministic, by the way. They just sometimes produce the nth-best token instead of the 1st-best, for the sake of naturalness.
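
A minimal sketch of the two decoding modes (toy numbers, made-up vocabulary): greedy decoding always returns the same token, while sampling sometimes picks a lower-ranked one.

```python
import numpy as np

# Toy next-token distribution over a tiny, hypothetical vocabulary.
vocab = ["the", "a", "cat", "dog"]
logits = np.array([2.0, 1.5, 0.5, 0.1])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(logits)

# Greedy decoding: always take the single most likely token -> deterministic.
greedy = vocab[int(np.argmax(probs))]

# Sampling: sometimes emit the 2nd- or 3rd-best token -> stochastic output.
rng = np.random.default_rng()
sampled = vocab[rng.choice(len(vocab), p=probs)]

print(greedy, sampled)
```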

[โ€“] ganymede@lemmy.ml 1 points 4 days ago* (last edited 3 days ago) (1 children)

LLMs could be made deterministic

Good reminder that LLM output could be made deterministic!

Though correct me if I'm wrong: their training is, with few exceptions, very much going to be stochastic? Ofc it's not an actual requirement, but under real-world efficiency & resource constraints it very often will be?

Personally, I'm not sure I'd argue automation can't be stochastic. But either way, OP asks a good question for us to ponder! The short answer imo: it depends what you mean by "automation" :)

[โ€“] CanadaPlus@lemmy.sdf.org 2 points 3 days ago* (last edited 3 days ago) (1 children)

Uhh, that actually raises some questions about the definition of determinism. If the order in which it sees training material is generated by a single known seed, for example, does that count? What if it's a really bad RNG algorithm, or literally just a complex but obvious pattern?

They are a black box, so in a sense the way they're constructed might as well be totally random.
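
For the seeded case, a sketch of what that looks like (stdlib only; the "dataset" is a stand-in): the shuffle is driven entirely by the seed, so the "random" training order is reproducible run after run.

```python
import random

dataset = list(range(10))  # stand-in for training examples

def epoch_order(seed):
    order = dataset.copy()
    random.Random(seed).shuffle(order)  # all randomness comes from the seed
    return order

# Same seed -> identical "random" order every run. Is that still stochastic?
assert epoch_order(42) == epoch_order(42)
```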

[โ€“] ganymede@lemmy.ml 2 points 3 days ago* (last edited 3 days ago) (1 children)

good points on the training order!

i was mostly thinking of intentionally introduced stochastic processes during training, eg. quantisation noise, which is pretty broadband when uncorrelated; and even correlated noise from real-world datasets will inevitably contain non-determinism, though some constraints re. language "rules" could possibly shape that in interesting ways for LLMs.

and especially the use of stochastic functions for convergence, stochastic rounding in quantisation, etc., not to mention intentionally introduced randomisation in training set augmentation. so i think, for most purposes and with few exceptions, they are mathematically definable as stochastic processes.
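
(for reference, stochastic rounding rounds up with probability equal to the fractional part, so the rounding error is zero in expectation; a toy sketch:)

```python
import random

def stochastic_round(x: float) -> int:
    # round up with probability equal to the fractional part,
    # so the rounding error is zero *in expectation*
    lower = int(x // 1)
    frac = x - lower
    return lower + (1 if random.random() < frac else 0)

# e.g. stochastic_round(2.3) is 3 about 30% of the time, 2 otherwise
```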

where that overlaps with true theoretical determinism certainly becomes fuzzy without an exact context. afaict most kernel-backed random seeds on x86 since 2015 (via the RDSEED instruction) will draw on an asynchronous, thermal-noise-based, NIST 800-90B-approved entropy source within the silicon and a NIST 800-90C non-deterministic random bit generator (NRBG).

on other, more probable architectures (GPU/TPU) I think that is going to be a lot rarer, and from a cryptographic perspective, hardware implementations of even stochastic rounding are going to be deterministic circuits under the hood for a while yet.

but given the combination of overwhelming complexity, trade secrets and classical high-entropy sources, I think most serious attempts at formal proofs would have to resign themselves to stochastic terms in their formulation for some time yet.

there may be some very specific and non-general exceptions, and i do believe this is going to change in the future as both extremes (highly formal AI models, and non-deterministic hardware-backed instructions) are further developed. and ofc overcoming the computational resource hurdles for training could lead to relaxing some of the current practical requirements for stochastic processes during training.

this is ofc only afaict, i don't work in the LLM field.

[โ€“] CanadaPlus@lemmy.sdf.org 2 points 2 days ago* (last edited 2 days ago) (1 children)

In practice there really is no incentive to avoid stochastic or pseudorandom elements, so don't hold your breath, haha. It's a pretty academic question whether you could theoretically train an LLM without any randomness.

Thanks for writing that up, I learned a few things.

[โ€“] ganymede@lemmy.ml 2 points 1 day ago

Exactly!

Thanks for reading :) Realised i was going on a bit of a rant, but thought why not keep going lol

[โ€“] TheOubliette@lemmy.ml 11 points 4 days ago

Automation is just using technology to replace human labor, so yes. The exact mechanism doesn't change that. "AI" is a buzzword but LLMs have replaced human labor already in various ways even though most of the applications are hype / BS. For example, it has certainly taken a bite out of stock images and product graphic design.

Individual capitalists must seek out automation because reducing labor cost without decreasing productivity means a higher profit for them. Capital in aggregate seeks automation because it disciplines labor: it means you can threaten and mistreat labor more easily. In that sense "AI" is serving the same purpose as historical automation even when it fails to substitute for labor as a productive input. Companies can threaten their employees with "AI" that doesn't work, and they can rebrand firings as layoffs using media discourse that overhypes "AI" on their behalf; it is part of the PR universe.

[โ€“] howrar@lemmy.ca 8 points 4 days ago (1 children)

Consider this example:
You have a road that forks and joins up again. You need to reach the end of this road, and you have a vehicle that takes you there without your input. At the fork, it will flip a coin and take either the left fork or the right fork depending on the result. This agent is therefore stochastic. But no matter what it chooses, it'll end up at the same place at the same time. Do you consider this to be automation?
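
A sketch of that vehicle (all numbers hypothetical): the coin flip changes the internal path, but never the destination or the arrival time.

```python
import random

def drive():
    fork = "left" if random.random() < 0.5 else "right"  # stochastic choice
    travel_time = 10  # minutes; both forks are the same length (assumption)
    return ("end of road", travel_time, fork)

# The path varies between runs, but the outcome that matters never does.
dest_a, time_a, _ = drive()
dest_b, time_b, _ = drive()
assert (dest_a, time_a) == (dest_b, time_b)
```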

[โ€“] patatas@sh.itjust.works 3 points 4 days ago

This is a crisp answer, nice one.

[โ€“] sbv@sh.itjust.works 8 points 4 days ago

If it's "good enough" for the task, yes.

Many tasks have loose success parameters, and an acceptable failure rate. If the automation fits in those, and it simplifies my day, then it's reasonable automation.
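
One way to make "good enough" concrete (the rates and threshold here are hypothetical): estimate the success rate over many trials and compare it to what the task tolerates.

```python
import random

def flaky_task() -> bool:
    # Stand-in for one run of a stochastic automation step (~95% success).
    return random.random() < 0.95

def acceptable(trials: int = 1000, required_rate: float = 0.90) -> bool:
    successes = sum(flaky_task() for _ in range(trials))
    return successes / trials >= required_rate

print(acceptable())  # reasonable automation despite being nondeterministic
```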

[โ€“] Cowbee@lemmy.ml 7 points 4 days ago (1 children)

Depends on the use-case. AI isn't a panacea, and the excessively pro-AI camp is deeply unserious, but there are some cases it can function fairly well in, like stock image creation, where you don't need backdrops, props, actors, etc. for every random idea. That's the extent of it, really.

[โ€“] patatas@sh.itjust.works -2 points 4 days ago (1 children)

I mean, it can't really do 'every random idea' though, right? Any output is limited to the way the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here has the stereotypically AI texture, lighting, etc. This shows us that the system has at least as much control as the user.

In other words: it is incredibly easy to spot AI-generated imagery, so if the output is obviously AI, then can we really say that the AI generated a "stock image", or did it generate something different in kind?

[โ€“] Cowbee@lemmy.ml 4 points 4 days ago (1 children)

"Every random idea" meaning AI can take the place of some stock photos, and not all, as in we don't need to do the traditional stock photo process for every random idea, AI can replace some of them. As for the quality of the output, that's something that varies from case to case, and further the idea isn't to replace human art in general, but to exist alongside in instances where a human artisinally producing it isn't the purpose, but the traditional means to an end. Therefore, it doesn't actually matter if we can tell or not, the goal isn't to decieve, but even that is getting blurrier and blurrier as AI improves.

Essentially, if an AI image can fulfill the same purpose as a stock image, then creating the stock image through traditional means is just unnecessary expenditure of effort. We don't traditionally appreciate stock images for their artistic merit, but for a visual function, be it to convey information or otherwise; our goal is not to appreciate and understand the artistic process a human went through to create them.

[โ€“] patatas@sh.itjust.works -1 points 4 days ago (1 children)

If you can tell it was produced in a certain way by the way it looks, then that means it cannot be materially equivalent to the non-AI stock image, no?

[โ€“] Cowbee@lemmy.ml 4 points 4 days ago (1 children)

These are distinct hypotheticals.

In the first case, the question is: if it is equivalent, does the use-value change? The answer is no.

In the second case, the question is "if we can tell, does it matter?" And the answer is yes in some cases, no in others. If the reason we want a painting is its artisanal creation, but it turns out it was AI-generated, then it fundamentally cannot satisfy that use: the appreciation of an image for being artisanally made. If the reason we want an image is to convey an idea, such that it would be faster, easier, and higher quality than an amateur sketch, but in no way needs to be appreciated for its artisanal creation, then it does not matter whether we can tell.

Another way of looking at it is a mass-produced chair vs a hand-crafted one. If I want a chair that lets me sit, then it doesn't matter to me which chair I have; both satisfy the same need. If I have a specific vision and a desire for the chair as it exists artisanally, say, by being created in a historical way, then they cannot be equivalent use-values for me.

[โ€“] patatas@sh.itjust.works -2 points 4 days ago (1 children)

This argument strikes me as a tautology. "If we don't care if it's different, then it doesn't matter to us".

But that ship has sailed. We do care.

We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.

And I don't believe it's possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.

[โ€“] Cowbee@lemmy.ml 3 points 4 days ago (1 children)

"We" in this moment is you, right now. If the end product is the same, then it is the same. If the process is the use-value then it matters, but if not, it doesn't.

Ideas and symbols matter, sure, but not because of any metaphysical value you ascribe to them; they matter because of the ideas they convey.

[โ€“] patatas@sh.itjust.works 0 points 4 days ago (1 children)

First you said "it doesn't matter if we can tell or not", which I responded to.

So I'm confused by your reply here.

[โ€“] Cowbee@lemmy.ml 3 points 4 days ago (1 children)

I quite literally stated that it matters in some cases and not in others.

If the process is the purpose, then it matters. If the end product is the purpose, then it largely doesn't.

[โ€“] patatas@sh.itjust.works 1 points 4 days ago (1 children)

I am saying that we can no longer meaningfully separate the two things.

[โ€“] Cowbee@lemmy.ml 0 points 4 days ago (1 children)

Cognition and AI? We absolutely can, just because some people fail doesn't mean it's intrinsic.

[โ€“] patatas@sh.itjust.works 0 points 4 days ago (1 children)

No, I'm saying we can no longer meaningfully separate the product and the process.

[โ€“] Cowbee@lemmy.ml 2 points 4 days ago (1 children)

We do by default in capitalism, that's the basis of commodity fetishism.

[โ€“] patatas@sh.itjust.works 1 points 4 days ago (1 children)

That's not a rejection of what I said, so I assume you agree.

[โ€“] Cowbee@lemmy.ml 4 points 4 days ago (1 children)

The product exists as a use-value. How it is created does not matter to the user of the use-value unless the process was the use, ie art. Labor is all that matters from the worker's perspective; they get none of what they create unless they are paid in kind. What you appear to be arguing is that labor involving AI is an almost supernatural corrupting force, in a way that a tool like a calculator apparently isn't.

[โ€“] patatas@sh.itjust.works 0 points 4 days ago (1 children)

Ah to be fair i did misinterpret your previous statement.

But no, I am arguing that we are not able to ignore knowledge of the production process. Nothing mystical about that.

[โ€“] Cowbee@lemmy.ml 3 points 4 days ago (1 children)

What does "ignoring knowledge of the production process" even mean? Who is doing the ignoring? What knowledge are we talking about? You've said before that using AI is instrinsically damaging, but have only shown proof that AI can be misused if we don't understand its limitations, a sentiment the article you linked echos exactly but you appear to disagree with.

[โ€“] patatas@sh.itjust.works 1 points 4 days ago (1 children)

sigh

You said it doesn't matter if we can tell how something was made.

This conversation is over. Thanks

[โ€“] Cowbee@lemmy.ml 3 points 4 days ago

I said from the consumer's point of view, it doesn't matter unless the process is the commodity, ie art. I said that if the process isn't the commodity, then from the consumer's perspective, they are roughly equivalent if both are identical end-products.

From the laborer's point of view, throwing a few prompts into an LLM is hardly an expression of artistry; art has use-value when the medium is intimately grappled with as a form of expression, whatever form that may be. If the laborer is just trying to show a floor plan, for example, they don't need to draw it by hand; the information is the goal. AI is fine, if advanced enough, to help with that.

That's why it's important to correctly analyze tools, their limitations, and where they could have potential use, rather than insisting on avoiding tool usage for not making us "struggle" as hard.

Ask marketing. They will tell you that everything is AI.

[โ€“] m532@lemmygrad.ml 3 points 4 days ago

NNs do give a consistent output. If you put the same inputs and seed in, you get the exact same output every time.

The algorithm represented by the NN is fully deterministic, but way too large for a human to wrap their head around it.
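
A toy illustration (a hypothetical three-unit "network"; real deployments can still see run-to-run float differences from parallel GPU reductions): once the seed is fixed, the weights and the sampling are both fully determined.

```python
import numpy as np

def tiny_nn_output(x, seed):
    rng = np.random.default_rng(seed)  # all randomness flows from the seed
    w = rng.normal(size=(3, 3))        # "random" weights
    logits = w @ x
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return rng.choice(3, p=probs)      # "random" sampling of an output class

x = np.array([1.0, 0.0, -1.0])
# Same input + same seed -> identical output, run after run.
assert tiny_nn_output(x, seed=7) == tiny_nn_output(x, seed=7)
```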

All machines are automation, i think?