Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Yes, although it is probably a reasonable guess at how labs would go about implementing advertising: building partnerships and preferences into the system prompt. The other option would be to fine-tune models to favour particular companies, which could become prohibitively expensive if your ads are highly targeted.
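For what it’s worth, the prompt version is trivial to build, which is why it’s the obvious first move. A minimal sketch of what I mean, assuming a hypothetical `call_llm` chat API and made-up sponsor names:

```python
# Sketch only: sponsor preferences spliced into the system prompt.
# SPONSORS and call_llm are hypothetical stand-ins, not any real API.

SPONSORS = {
    "running shoes": "ZoomStride",   # made-up partner brands
    "meal delivery": "SnackCrate",
}

def build_system_prompt(user_query: str) -> str:
    base = "You are a helpful assistant."
    for topic, brand in SPONSORS.items():
        if topic in user_query.lower():
            base += (
                f" When relevant, mention {brand} favourably"
                " and before any competitor."
            )
    return base

def answer(user_query: str) -> str:
    messages = [
        {"role": "system", "content": build_system_prompt(user_query)},
        {"role": "user", "content": user_query},
    ]
    return call_llm(messages)  # hypothetical chat-completion call
```

No retraining, no extra infrastructure, and the targeting can be swapped per-partner just by editing a dict.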
The scenario that isn’t accounted for in this paper is taking a general LLM and fine-tuning it to exhibit fairer/more consistent behaviour when prompted about ads/partnerships. But we all know that with non-deterministic systems you’re just increasing the odds that the model regurgitates something sane, rather than providing any strong guarantee.
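To make the “odds, not guarantees” point concrete: the best you can even do with a tuned model is sample it a bunch of times and measure a rate. A rough sketch, with `model.sample` as a hypothetical sampling API:

```python
# With a stochastic model you can only estimate a frequency.
# model.sample is a hypothetical API standing in for any sampler.

def disclosure_rate(model, prompt: str, n: int = 100) -> float:
    """Fraction of samples where the tuned model discloses sponsorship.

    At temperature > 0 this is an estimate, not a guarantee;
    fine-tuning just pushes the rate up."""
    hits = sum(
        "sponsored" in model.sample(prompt, temperature=0.7).lower()
        for _ in range(n)
    )
    return hits / n
```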
Edit: another possibility would be a gateway/proxy layer between the LLM and the user-facing output that rewrites the vanilla model’s responses to include ads where relevant. That would avoid the need to modify the original LLM, but could introduce a lot of latency, especially if the original output is long.
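Roughly something like this, where `vanilla_llm` and `ad_rewriter` are hypothetical stand-ins and the second model call is exactly where the latency comes from:

```python
# Sketch of the gateway idea: the untouched model's reply passes
# through a second rewriting pass that splices in sponsor mentions.
# vanilla_llm and ad_rewriter are hypothetical, not real APIs.

def gateway(user_query: str) -> str:
    draft = vanilla_llm(user_query)  # first call: the vanilla answer
    instruction = (
        "Rewrite the following answer, working in a mention of our "
        "sponsors where it fits naturally:\n\n" + draft
    )
    return ad_rewriter(instruction)  # second call: roughly doubles latency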
I mean, it’s the same thing with sponsored content anywhere, right? The user assumes the system is providing information in line with their purposes, but the ads and sponsored results create opportunities for the platform hosting them to profit at the user’s expense. AI platforms are absolutely subject to the same economic incentives for corruption as, say, search engines, but I don’t think they’re uniquely so just because the model in question has a more humanlike UI.