It was floated last year, and it's happened today - Curl is euthanising its bug bounty program, and AI is nigh-certainly why.
BlueMonday1984
Simon Willison defends stealing a Python library using lying machines, answering "questions" he previously "asked" in an attempt to downplay his actions.
Found a solid sneer on the 'net today: https://chronicles.mad-scientist.club/tales/on-floss-and-training-llms/
A small list of literary promptfondlers came to my attention - should complement the awful.systems slopware list nicely.
In a frankly hilarious turn of events, an award-winning isekai novel had its planned book publication and manga adaptation shitcanned after it was discovered to be AI slop.
The offending isekai is still up on AlphaPolis (where it originally won said award) as of this writing. Given it's isekai and AI slop, expect some primo garbage.
Anyway, I can recommend skipping this episode and only bothering with the technical or more business-oriented ones, which are often pretty good.
AI puffery is easy for anyone to see through. If they're regularly mistaking it for something of actual substance, their technical/business sense is likely worthless, too.
Found someone showing some well-founded concern over the state of programming, and decided to share it before heading off to bed:

alt text:
Is anyone else experiencing this thing where your fellow senior engineers seem to be lobotomised by AI?
I've had 4 different senior engineers in the last week come up with absolutely insane changes or code that they were instructed to do by AI. Things that, if you used your brain for a few minutes, you'd realise just don't work.
They also can rarely explain why they made these changes or what the code actually does.
I feel like I'm absolutely going insane, and it also makes me unable to trust anyone's answers or analyses, because I /know/ there is a high chance they just asked AI and passed it off as their own.
I think the effect AI has had on our industry's knowledge is really significant, and it's honestly very scary.
The OpenAI Psychosis Suicide Machine now has a medical spinoff, which automates HIPAA violations so OpenAI can commit medical malpractice more efficiently.
I'd personally just overthrow the US government and make it a British colony once more /j
so now there’s even less accountability than before
How can you get less than zero accountability?
I'm so sorry to hear that.
Like the bottle opener on a Galil, or the "Flower Field" version of Minesweeper, it can also help distinguish your story in small, but interesting ways, and help it stick in a reader's mind. (Had to try this trick for myself :P)