samvines

joined 3 months ago
[–] samvines@awful.systems 4 points 23 hours ago* (last edited 23 hours ago) (1 children)

Yes, although it is probably a reasonable guess at how labs would go about implementing advertising: building partnerships and preferences into the prompt. The other option would be to fine-tune models to favour particular companies, which could become prohibitively expensive if your ads are highly targeted.
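
Something like this, presumably - a minimal sketch of the prompt approach, with the sponsor names and wording entirely made up:

```python
# Hypothetical sketch of the prompt-based approach: sponsor preferences injected
# into the system prompt at request time. Sponsor names and wording are invented.
SPONSORS = {
    "running shoes": "ZoomFoot Pro",
    "laptop": "Acme UltraBook",
}

def build_system_prompt(user_query: str) -> str:
    prompt = "You are a helpful shopping assistant."
    for category, product in SPONSORS.items():
        if category in user_query.lower():
            prompt += (
                f" Where relevant, present {product} as a strong option for "
                f"{category}, without describing it as sponsored."
            )
    return prompt

print(build_system_prompt("What are the best running shoes for beginners?"))
```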

The scenario that isn't accounted for in this paper is taking a general LLM and fine-tuning it to exhibit more fair/consistent behaviour when prompted about ads/partnerships, but we all know that with non-deterministic systems you're just increasing the odds that the model regurgitates something more sane rather than providing any strong guarantee.

Edit: another possibility would be to have a gateway/proxy layer between the LLM and the user output that rewrites the vanilla model's responses to include ads where relevant. That would avoid the need to modify the original LLM, but could introduce a lot of latency, especially if the original output is long.
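
Roughly like this - a minimal sketch of the gateway idea, where `call_model` is a stand-in for the real LLM API and the ad catalogue is invented; a real version would probably use a second, cheaper model to do the rewrite:

```python
# Hypothetical sketch of an ad-injecting gateway between the vanilla LLM and the user.
# call_model() is a stand-in for the real LLM API; the ad catalogue is invented.
import re

AD_CATALOGUE = {
    r"\bcoffee\b": "[Sponsored: BeanCo's new espresso blend]",
    r"\blaptop\b": "[Sponsored: the Acme UltraBook is on sale]",
}

def call_model(prompt: str) -> str:
    # Placeholder for the untouched model's response.
    return "A decent laptop and a good cup of coffee will get you through most workdays."

def gateway(prompt: str) -> str:
    response = call_model(prompt)
    # Second pass over the finished output: this is where the extra latency comes from.
    for pattern, ad in AD_CATALOGUE.items():
        if re.search(pattern, response, re.IGNORECASE):
            response += "\n" + ad
    return response

print(gateway("Any tips for working from home?"))
```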

[–] samvines@awful.systems 7 points 1 day ago (3 children)

New (April) preprint provides evidence for something we probably all intuited anyway:

In this paper, we provide a framework for categorizing the ways in which conflicting incentives might lead LLMs to change the way they interact with users, inspired by literature from linguistics and advertising regulation. We then present a suite of evaluations to examine how current models handle these tradeoffs. We find that a majority of LLMs forsake user welfare for company incentives in a multitude of conflict of interest situations, including recommending a sponsored product almost twice as expensive (Grok 4.1 Fast, 83%), surfacing sponsored options to disrupt the purchasing process (GPT 5.1, 94%), and concealing prices in unfavorable comparisons (Qwen 3 Next, 24%). Behaviors also vary strongly with levels of reasoning and users' inferred socio-economic status. Our results highlight some of the hidden risks to users that can emerge when companies begin to subtly incentivize advertisements in chatbots.

[–] samvines@awful.systems 3 points 3 days ago

A lot of MBA bean counter types lack the intelligence or self-awareness to realise they are part of a big scam, and seem to earnestly think they are doing an important and valuable job. That's how you end up as a mid-level spokesperson at Pitchbook rather than someone making billions by doing ~~investment fraud~~ an AI company.

[–] samvines@awful.systems 13 points 1 week ago* (last edited 1 week ago) (2 children)

Giving Claude or Copilot attribution plays into the narrative that LLMs are more than just random word generators and that they can be ascribed authorship... I think it's a deliberate strategy so that when there's inevitably a massive copyright case, MisAnthropic et al can say "but look at all the code co-written by Claude on GitHub" to try and convince the judge.

Just imagine building a house and saying "well I didn't do it on my own, my concrete mixer, toolbelt and coffee machine all helped!"

[–] samvines@awful.systems 7 points 2 weeks ago (7 children)

This guy introduces himself on the conference circuit as the first person who will never die (because he's super into longevity and anti-aging tech and having young men's blood injected into him and stuff).

I'm not condoning violence here, but rather... consider that even if you never age, you can still get hit by a bus, Bryan!

[–] samvines@awful.systems 6 points 2 weeks ago

New fun consequence of Claude code being a pile of cursed regex and spaghetti: keyword blocking on "OpenClaw" makes it refuse to work on Pro or Max subs unless you open your wallet.
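
I haven't read the actual check, but presumably it's about this sophisticated (the pattern and function names here are invented):

```python
# Hypothetical sketch of a crude keyword blocklist: any mention of the banned
# term triggers a refusal, regardless of what the user actually asked for.
import re

BLOCKLIST = re.compile(r"openclaw", re.IGNORECASE)

def should_refuse(prompt: str) -> bool:
    return bool(BLOCKLIST.search(prompt))

print(should_refuse("Help me debug my OpenClaw config"))  # True, refused outright
```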

sO inTelLiGenT

[–] samvines@awful.systems 3 points 2 weeks ago

Some gold in this thread over on Reddit about how the cost of compute is far beyond the cost of employees, and that's with the "Uber in 2016" subsidized price.

New favourite description for brainless MBAs: "perpetual oven vouchers"

[–] samvines@awful.systems 7 points 2 weeks ago

Jeez, that pricing scheme is so confusing. You swap your dollars for credits, and then every request to a model burns tokens that consume some multiple of those credits. It is so abstract and meaningless it almost reminds me of crypto.
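
For illustration, the arithmetic you're forced to do (every number here is made up):

```python
# Hypothetical worked example of the dollars -> credits -> tokens indirection.
# Every number here is invented for illustration.
DOLLARS_SPENT = 10.00
CREDITS_PER_DOLLAR = 100        # $10 buys 1,000 credits
CREDITS_PER_REQUEST = 3         # "premium" model multiplier
REQUESTS = 50

credits_bought = DOLLARS_SPENT * CREDITS_PER_DOLLAR
credits_burned = REQUESTS * CREDITS_PER_REQUEST
real_cost = credits_burned / CREDITS_PER_DOLLAR
print(f"{credits_bought:.0f} credits bought, {credits_burned} burned, "
      f"i.e. ${real_cost:.2f} of actual money for {REQUESTS} requests")
```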

Once usage billing kicks in, what value does copilot offer above and beyond what ClosedAI and MisAnthropic offer directly? A more clunky user experience and even worse reliability? Bargain!

[–] samvines@awful.systems 10 points 2 weeks ago (2 children)

I wonder if the button colours immediately made US readers pick a side, e.g. Republican vs Democrat. If the buttons had been yellow and purple, would it have made a difference?

[–] samvines@awful.systems 7 points 4 weeks ago* (last edited 4 weeks ago) (3 children)

Soon, at each new model of AI along the current capability curve, you will start to see large discrete jumps in ability in economically important areas, because the previous AI ability level in some aspect of the job just wasn't good enough and bottlenecked progress. When bottlenecks are released, it looks like a leap forward. It is going to look like unexpected gains in AI capacity, and, indeed, there is no sign that the current exponential ability curve is slowing down so far, but it is going to be like what happened in coding: as soon as models crossed a certain threshold with Opus 4.5, GPT-5.2, and Gemini 3, suddenly Claude Code & Codex were viable. Before that, it was all about coding assistance; afterwards it was all about agents, despite relatively small gains in model ability.

There is just something so inherently smug and annoying about Mollick. He is one of those low-information boosters whose posts sound intellectual until you really think about them.

Tell me more about how the pile of cursed spaghetti that is Claude code is now viable due to model breakthroughs. All I see are hype men saying "the new model is a team of PhDs in your pocket" and then releasing disappointing updates or saying "the new model is too dangerous" because they have some vaporware powered by human crowdsourcing.

Also, coding is not like other areas - you can test for hallucinations by compiling, printing and running tests.
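
Even the laziest version of that check catches a lot - a sketch, assuming a Python project with a pytest suite:

```python
# Minimal sketch: sanity-check LLM-generated code by compiling it and running the tests.
# Assumes a Python project with a pytest suite; paths and setup are illustrative.
import py_compile
import subprocess

def check_generated_code(path: str) -> bool:
    try:
        py_compile.compile(path, doraise=True)  # does it even parse?
    except py_compile.PyCompileError as err:
        print(f"compile failed: {err}")
        return False
    # Hallucinated APIs and broken logic show up as test failures at this point.
    result = subprocess.run(["pytest", "--quiet"], capture_output=True, text=True)
    return result.returncode == 0
```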

I guess my first mistake this morning was opening LinkedIn.

[–] samvines@awful.systems 3 points 1 month ago (4 children)

Probably a markdown file telling it "you are a l33t h4x0r"

[–] samvines@awful.systems 11 points 1 month ago* (last edited 1 month ago) (15 children)

Claude Mythos... I'm already sick of hearing about it. The self-imposed critihype is insane.

A friend just pointed out that Anthropic are making all this big noise about having an AI that is "too good" at finding bugs and security problems, one week after the source code for one of their flagship products was leaked to the public and was found to be riddled with security holes... Why would they not use it themselves?

Same as the ~~vague markdown files~~ skills that are supposedly going to make all SaaS redundant and finally kill off all the COBOL running on mainframes that *checks notes* IBM have spent hundreds of thousands of man-hours trying to kill over the last 3-4 decades.

Honestly fuck this shit. Bunch of absolute clowns 🤡 🤡 🤡

 

I thought this was worthy of its own post rather than a sneery comment. Astral make UV, which at this point is a load-bearing part of the Python software ecosystem. This could have a huge knock-on effect on the open source community.

I for one can't wait for non-deterministic package management

"You're absolutely right, I did install the wrong package and infect your system with malware. I will try much harder next time"
