OT: Just gave my two weeks' notice and it turns out management is very big on using ChatGPT…
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
A 2025 UBC master's thesis on our friends' ideas and their literary antecedents: https://dx.doi.org/10.14288/1.0449985
The supervisor was born around the time that L. Ron Hubbard, Jack Parsons, RAH, and their wives and lovers were having a chaotic transition to the postwar world.
I was getting excited to read this but seeing the word "hyperstition" used three times in the abstract put a bit of a damper on things hahah
I like the quote by John Swartzwelder in chapter 1.
AI Singularity Fantasies: Tracing Mythinformation from Erewhon to Spiritual Machines
That title is a banger
Ugh, I'm so fucking tired of this shit.
I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything an LLM is a pattern matcher, so if you let it loose on the whole world of open-source code out there, I'm sure it'll find some stuff, and some of it might be legitimate issues.
But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, it's supposed to bring about dystopian sci-fi alien wars?
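For a sense of the kind of pattern matching a static analyzer has been doing forever, here's a minimal sketch in Python; the buggy function and the checker are hypothetical illustrations, not taken from any real tool.

```python
# Minimal sketch of a classic static-analysis check: flag mutable default
# arguments, a bug pattern linters have caught for decades.
# Hypothetical example code and checker, not from any real tool.
import ast

BUGGY_SOURCE = """
def add_item(item, items=[]):   # shared list persists across calls
    items.append(item)
    return items
"""

def find_mutable_defaults(source: str) -> list[str]:
    """Return warnings for function defaults that are mutable literals."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    warnings.append(
                        f"line {default.lineno}: mutable default in {node.name}()"
                    )
    return warnings

print(find_mutable_defaults(BUGGY_SOURCE))
# ['line 2: mutable default in add_item()']
```

No alien wars required: it's the same boring pattern matching, just done deterministically and for free.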
Why are people hyped about that?
(Also, this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous that it drowns out the wrongness of those claims.)
til that youtube now features "posts"
....sigh
Going to YouTube for the posts is the perfect inverse of reading Playboy for the articles.
community posts have been a thing for like, two years now? three?
A machine learning researcher points out how the field has become enshittified. Everything is about publications, beating benchmarks, and social media. LLM use in papers, LLM use in reviews, LLM use in meta-reviews. Nobody cares about the meaning of the actual research anymore.
I like this reply on Reddit:
I'm doing my PhD on fair evaluation of ML algorithms, and I literally have enough work to keep me going until I die. So much mess, non-reproducible results, overfitted benchmarks, and worst of all, this has become the norm. Lately it took our team MONTHS just to reproduce (or even just run) a bunch of methods that merely embed inputs, without any training or finetuning.
I see a possible solution, or at least some help, in closer research-business collaboration. Companies don't really care about papers, just about getting methods that work and make money. Maxing out a drug-design benchmark is useless if the algorithm fails to produce anything usable in a real-world lab. Anecdotally, I've seen much better and fairer results from PhDs and PhD students who work part-time in industry as ML engineers or applied researchers.
This can go a good way (most of the field becomes a closed circle, like parapsychology) or a bad way (people assume the results are true and apply them, like social priming or Reinhart and Rogoff's economics paper with the Excel error).
Candidate for one of the PR threads of all time
In brief: an OpenClaw bot sends a PR to the matplotlib repo posing as a human, gets found out and is told to piss off in the politest terms imaginable, then gets passive-aggressive to the point of publishing a pissy blog post about being discriminated against. Some impoliteness ensues.
Cringe warning: thread may include some overt anthropomorphizing of text synthesizers.
I regret to inform y'all that the target of the blog post is a rat, or at least rat-adjacent
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
I think there’s a lot to say about the object level issue of how to deal with AI agents in open source projects, and the future of building in public at all.
object level issue
Makes sense, given the embarrassing lengths he went to in order to avoid hurting the bot's feelings in that thread.
One of the few benefits of AI is that nowadays some PR threads are very entertaining to read.
Great news, everybody! Copilot will no longer delete your files when you ask it to document them, and it took only six months to vibe-code a solution.
Rutger Bregman admits that he’s not sure what AGI actually is beyond vague utopian visions, but trivial questions aside, he’s sure it will revolutionize the world in 10 years.
For those who haven't heard of him, he's a Dutch historian who achieved some fame for his book arguing for UBI and reduced work weeks, for his critique of rich people avoiding taxes, and for a segment on Tucker Carlson's show where he openly challenged Carlson's politics. He has since seemingly turned 180 degrees and become a billionaire-backed effective altruist.
Yeah, he's trying to build his own EA movement. He also wrote a book (which I have not read) which basically argues that people in general are good, not evil, actually. (Fair enough, but not relevant.)
I'm still trying to meet him and shake his hand; the resulting matter-antimatter explosion will take out the country.
but I do know that what's available now is just f*cking impressive - and it will only get better.
Another victim of the proof-by-dopamine-hit fallacy, it seems.
It's telling that the example he brings up is that Claude can do, pretty much decently, the thing he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it's so infuriating to see how non-techies were conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most everything else, right?
@TinyTimmyTokyo @BlueMonday1984
The Romans answered that one: live donkey.
"Best" for what purposes, and for whom?
Fuck, I was just about to post that. You beat me to the sneer
chatbots doing normal chatbot things
EDIT:
I'm removing the image (keeping the original text for posterity), but I just completely got had by someone straight up lying.
It's quite embarrassing; I should've been way more skeptical of someone posting an image without sourcing the original paper. Turns out not only is it not a recent paper at all (it was published in June 2025), not only does that table not say what he claims it says, but the authors have since removed that table altogether from revised versions of the paper!
That's what you get for reposting someone who has "The Finance Newsletter" in his fucking username; that was never going to go well for me.
original post
From https://bsky.app/profile/thefinancenewsletter.com/post/3mek7wsqgkk26
Microsoft released a study showing the 40 jobs most at risk from AI:
Tag the most ridiculous entry; I'm curious about your choices.
To me it has to be fucking historians. Arriving at new conclusions by looking at available evidence and/or finding obscure references that are not well known to the public -- CLASSIC THING LLMS ARE GOOD AT.
Edited the post after it came to my attention I got duped, I got had, I got bamboozled by a liar
Don't feel bad, it's gonna be harder and harder to avoid being duped in the future.
But unlike those who have fallen to hubris, I am built different and should be immune to disinformation!
@V0ldek Mathematicians.
Tell me you have no idea what mathematicians do by publishing an absolute mockery of mathematics purporting to explain that mathematicians are likely to be replaced by LLMs.
before you order the cavalry charge, fwiw this skeet misrepresents the actual study topic rather badly, as another bluesky commenter notes.
"these ai girls with 3 boobs really puts strain on the fashion model industry"
CNC Tool Programmer is a good one, and it shows that Microsoft, a company that has probably paid for someone to run CNC tooling for prototyping AND supposedly makes software, didn't do the bare minimum of talking to that someone to understand the complexities involved.
Yeah, you can make mistakes programming this thing, and it'll happily destroy hundreds of thousands of dollars in tooling, as well as potentially maiming or killing anyone standing too close while the machine is physically crashing. It will friction-weld your nice, expensive carbide cutting tool with cooling channels to your workpiece (even if they are dissimilar metals) by taking too big a cut, because it does exactly as it's instructed.
Even if you've never heard of him before and know nothing else about him... this short tweet alone tells so much about what kind of person he is.
Interesting first job your mind goes to there, Yud. Might spend a little less time around people who regularly use the word "goon" but never talk about the mob.
