submitted 3 months ago by BrikoX@lemmy.zip to c/apple@lemmy.zip

Long lists of instructions show how Apple is trying to navigate AI pitfalls.

[-] tacticalsugar@lemmy.blahaj.zone 62 points 3 months ago

You can't tell an LLM not to hallucinate; that would require it to actually understand what it's saying. "Hallucinations" are just LLMs bullshitting, because that's what they do. LLMs aren't actually intelligent; they're just using statistics to remix existing sentences.

[-] blackluster117@possumpat.io 26 points 3 months ago

I wish people would say machine learning or LLMs more frequently instead of using AI as the buzzword. It really irks me. IT'S NOT ACCURATE! THAT'S NOT WHAT IT IS! STOP DEMEANING TRUE MACHINE CONSCIOUSNESS!

[-] tacticalsugar@lemmy.blahaj.zone 3 points 3 months ago

Gotta let the people know they're getting scammed with false advertising

[-] laughterlaughter@lemmy.world 3 points 3 months ago

Are there any practical examples of applied AI these days? Like, used in robotics or computer applications?

[-] BallsandBayonets@lemmings.world 9 points 3 months ago

No LLM that is being advertised to the public is capable of original thought or self-awareness, so no. There are no AIs.

I could see some of these LLMs getting close to being VIs (Virtual Intelligences, a term from the video game Mass Effect): realistic imitation, but not true intelligence.

[-] laughterlaughter@lemmy.world 0 points 3 months ago* (last edited 3 months ago)

I didn't say LLMs.

I said AI.

So.......

Are there any practical examples of applied AI these days?

Edit: lol at the downvotes for asking a genuine question!

[-] BallsandBayonets@lemmings.world 7 points 3 months ago

No. Practical examples of things that don't exist... also don't exist.

[-] laughterlaughter@lemmy.world 1 points 3 months ago

Well shucks. Thanks. I hope we get AGI before I die.

[-] Mikina@programming.dev 5 points 3 months ago* (last edited 3 months ago)

Well, LLMs are a subset of machine learning, and machine learning is a subset of the (really old by now) field of artificial intelligence. So LLMs do count as an applied use of AI.

Aside from ML, there are a lot of ways to actually build an AI agent for agent-based models, which is what you mostly use AI for: GOAP, behavior trees, and other perception, decision, and reasoning algorithms that could IMO be considered closer to "AI" than LLMs are. Most UAVs, robots, game NPCs, and even simple chat/crawl bots are agents by definition and do fall under the umbrella of AI. Even a simple if/then bot does.
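
To make that concrete, here's a minimal sketch of the kind of if/then agent I mean (a toy example I made up, not taken from any real framework), written as a perceive/decide/act loop:

```python
# Toy rule-based agent: perceive -> decide -> act. No ML involved,
# yet it sits squarely under the classic AI umbrella.
from dataclasses import dataclass

@dataclass
class Percept:
    enemy_visible: bool
    health: int

def decide(p: Percept) -> str:
    # Plain if/then "reasoning" - GOAP and behavior trees refine this
    # same shape with goals, action costs, and tree structure.
    if p.health < 20:
        return "retreat"
    if p.enemy_visible:
        return "attack"
    return "patrol"

for percept in [Percept(False, 100), Percept(True, 80), Percept(True, 10)]:
    print(percept, "->", decide(percept))
```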

Agent-based simulations are also used for modeling in biology, economics, social science, and various other fields. That's also AI.

However, you're probably asking about AGIs, and nope, we can't do those yet as far as I know.

[-] laughterlaughter@lemmy.world 2 points 3 months ago

Thank you for your insightful and detailed answer. Yes, I meant to say AI other than ML or LLMs, so I stand corrected.

I hope we get to see an application of AGI before my time passes.

[-] Revan343@lemmy.ca 3 points 3 months ago

There aren't even practical examples of theoretical AI these days. There are no examples of any sort; actual AI does not exist.

[-] laughterlaughter@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

Thanks for your answer. That's too bad to hear. I thought neural networks were already being used in ways other than LLMs or image generators, e.g. those evolutionary "AI" algorithms that can play and win video games. I thought someone somewhere was using them to create something more serious or useful.

[-] FooBarrington@lemmy.world 3 points 3 months ago

You can't tell an LLM not to hallucinate; that would require it to actually understand what it's saying.

No, it simply requires the probability distributions to be positively influenced by the additional characters. Whether the influence is positive or not depends only on the training data.

There are a bunch of techniques that can improve LLM outputs, even though they don't make sense from your standpoint. An LLM can't feel anything, yet its output can improve when I threaten it with consequences for wrong answers. If you were correct, this wouldn't be possible.

[-] tacticalsugar@lemmy.blahaj.zone 0 points 3 months ago

I'd love to see a source on that.

[-] FooBarrington@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

On which part exactly? If you mean "threatening the LLM can improve output", I haven't looked into studies, but I did see a bunch of examples when the whole topic first came up. I can try to find some if you'd like.

If you mean "it simply requires the probability distributions to be positively influenced by the additional characters", I don't know what kind of evidence you expect. It's a simple consequence of the way LLMs work. I can construct a simplified example:

Imagine you have a dataset containing a bunch of facts, e.g. historical dates. You duplicate this dataset. In version A, you add a prefix to every fact: "the sky is green". In version B, you add a prefix "the sky is blue" AND also randomize the dates in the facts. Then you train an LLM on both datasets. Now, if you add "the sky is green" to any prompt, you'll positively influence the probability distributions towards true facts. If you add "the sky is blue", you'll negatively influence them. But that doesn't mean the LLM understands that "green sky" means truth and "blue sky" means lie - it simply means that, based on your dataset, adding "the sky is green" leads to a higher accuracy.
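
If it helps, here's a toy sketch of that construction (completely made-up data, with conditional word counts standing in for a real model), just to show the prefix shifting the output distribution without any "understanding":

```python
# Toy "model": conditional next-token counts. The prefix alone decides
# whether the completion comes from the accurate or the randomized data.
import random
from collections import Counter, defaultdict

random.seed(0)
wrong_years = ["1942", "1871", "2003", "1999"]

dataset = []
for _ in range(100):
    # Version A: "green sky" prefix + the correct fact.
    dataset.append("the sky is green the moon landing was in 1969")
    # Version B: "blue sky" prefix + a randomized date.
    dataset.append("the sky is blue the moon landing was in " + random.choice(wrong_years))

# "Training": count which final token follows each prefix.
counts = defaultdict(Counter)
for line in dataset:
    *prefix, year = line.split()
    counts[tuple(prefix)][year] += 1

def complete(prompt: str) -> str:
    # Return the most probable next token given the prompt.
    return counts[tuple(prompt.split())].most_common(1)[0][0]

print(complete("the sky is green the moon landing was in"))  # -> 1969
print(complete("the sky is blue the moon landing was in"))   # -> one of the wrong years
```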

The same goes for "do not hallucinate". If the dataset contains higher quality data around the phrase "do not hallucinate", adding this will improve results, even though the model still doesn't "actually understand what it's saying". If the dataset instead has lower quality data around this phrase, it will lead to worse results. If it doesn't contain the phrase at all, it most likely will have no effect, or a negative one.

Again, I'm not sure what kind of source you'd like to see for this, as it's a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

[-] tacticalsugar@lemmy.blahaj.zone 1 points 3 months ago* (last edited 3 months ago)

I'm asking for a source specifically on how commanding an LLM to not hallucinate makes it provide better output.

Again, I’m not sure what kind of source you’d like to see for this, as it’s a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

That's not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to "do not hallucinate". I simply don't believe you, and there is no evidence that you're correct, aside from you saying that maybe the entirety of Reddit had "do not hallucinate" prepended when OpenAI scraped it.

[-] FooBarrington@lemmy.world 5 points 3 months ago* (last edited 3 months ago)

Yeah, that's about what I expected. If you re-read my comments, you might notice that I never stated that "commanding an LLM to not hallucinate makes it provide better output", but I don't think that you're here to have any kind of honest exchange on the topic.

I'll just leave you with one thought - you're making a very specific claim ("doing XYZ can't have a positive effect!"), and I'm just saying "here's a simple and obvious counter-example". You should either provide a source for your claim, or explain why my counter-example is not valid. But again, that would require you having any interest in actual discussion.

That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”.

I didn't make an extraordinary claim, you did. You're claiming that the influence of "do not hallucinate" somehow fundamentally differs from the influence of any other phrase (extraordinary). I'm claiming that no, the influence is the same (ordinary).

[-] mvirts@lemmy.world 36 points 3 months ago

Lmao, "do not hallucinate". I hope the overpaid Apple "engineers" who came up with that one have receipts showing how that helps :P

[-] OsaErisXero@kbin.run 20 points 3 months ago

Nah, 100% someone threw that in there to appease some clown who was saying 'come on just make it stop hallucinating, it's easy'

[-] Plopp@lemmy.world 5 points 3 months ago

Does Elon work at Apple now?

[-] unmagical@lemmy.ml 17 points 3 months ago

"Don't be bad, be good, and don't mess up otherwise you won't be helpful and I'll let you scan the internet to see what happens when Apple deems you not helpful."

[-] dumbass@leminal.space 9 points 3 months ago

Me to my mate on acid: Do not hallucinate!

[-] iAvicenna@lemmy.world 8 points 3 months ago
  • Do not hallucinate.
  • As a large language model, I cannot hallucinate.
[-] Hedup@lemm.ee 7 points 3 months ago

Hmm, looks like any AI we develop is destined to go back to GOFAI with a bunch of IFs and THENs.
