Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
tip from a discord:
I've been doing some microtasking to train LLMs for the last few months to earn some extra cash; this last month it's all dried up, no tasks available. I can't help thinking that's a sign of a bubble deflating.
I knew the HN reaction to Marine Le Pen being barred from running for public office would be unhinged, but weirdly I did not have "good, she's a woman and weak so now a strong man can lead RN to glorious victory" on my bingo card. More fool me, misogyny is always on the card.
Why yes, I'm sure 29 year old Jordan "El italiano 93" Bardella, who 10 years ago was uploading Call of Duty videos to youtube, is a much stronger candidate.
Anyone who uses gen AI is my opp, status report:
I've been browsing bluesky a fair amount recently and it's chock full of artists making non-AI Ghibli inspired art in response to the AI trend. Which is neat.
jesus fucking christ I think that IDF tweet is the worst thing that has ever existed
Taking the "insult to life itself" to a whole other level.
My University Keeps Sending Me Stupid Emails About AI, a continuing series:
From the email:
The debate will be chaired by Michael Pike. Speaking for the motion are Prof. Gregory O'Hare (TCD), Maeve Hynes, Prof. Gary Boyd and Chatgpt assisted by Student Curator Ruan McCabe, all UCD. Speaking against the motion are David Capener (UU), Lucy O'Connell, Peter Cody and Meabh O'Leary, all UCD.
OpenNutrition -- ~~a dataset~~ an LLM that allows you to play "vibe nutritionist"
https://news.ycombinator.com/item?id=43569190
First response is good quality:
This is not a dataset. This is an insult to the very idea of data. This is the most anti-scientific post I have ever seen voted to the top of HN. Truth about the world is not derived from three LLMs stacked on top of each other in a trenchcoat.
Today on the orange site, an AI bro is trying to reason through why people think he's weird for not disclosing his politics to people he's trying to be friendly with. Previously, he published a short guide on how to talk about politics, which — again, very weird, no possible explanation for this — nobody has adopted. Don't worry, he's well-read:
So far I've only read Harry Potter and The Methods of Rationality, but can say it is an excellent place to start.
The thread is mostly centered around one or two pearl-clutching conservatives who don't want their beliefs examined:
I find it astonishing that anyone would ask, ["who did you vote for?"] … In my social circle, anyway, the taboo on this question is very strong.
To which the top reply is my choice sneer:
In my friend group it's clear as day: either you voted to kill and deport other people in the friend group or you didn't. Pretty obvious the group would like to know if you're secretly interested in their demise.
Wow, farriers are going crazy with these modern horseshoe designs!
image transcription
Above: a single-axis scatter plot. The ends of the axis are labeled "Left" and "Right". The data points are distributed roughly like a normal distribution, with the center more concentrated than the extremes. Caption: "What people think the political spectrum is vs What it actually is".
Below: a two-dimensional scatter plot. The ends of the horizontal axis are labeled "Left" and "Right", and the vertical axis is labeled "Independent Thought" at the top and "Groupthink" at the bottom. The data points are evenly distributed inside a bell-curve shape, with the peak of the hump at Independent Thought, centered between Left and Right, and the wide bottom of the bell spanning the whole Left-Right axis at maximum Groupthink.
Data points near the top of the bell (Independent-Center) labeled '"Un-intentional moderates" (from Paul Graham's The Two Kinds of Moderate)'. [Original image uses double quotes around the term Un-intentional moderates and single quotes around the title of Paul Graham's shitpost.]
Data points at the bottom center of the bell (Groupthink-Center) labeled "Intentional moderates".
In the bottom corner a watermark crediting the image to "@shw1nm".
I love the implication that being an independent thinker means agreeing with both the majority and other independent thinkers.
Hi, I'm an unintentional moderate centrist! I don't have an ideology, only views! My Overton window is a peephole! Kamala is a far leftist! Trump is orange! People should stop treating politics like team sports and join the non-team-sports tribe like me! Political correctness gone mad! Let me tell you about Chesterton's fence! Everyone's sheeple except my flock! Say it with me: I'm an independent thinker!
a journalist wants to ask me about Urbit. Should I install Urbit so I can at least say I've tried it and can speak authoritatively?
[ ] no
[ ] hell no
[ ] jesus h. christ no why
i just had a 1.5-hour chat with another journalist about the Fucking Rationalists
i have a call with a third one scheduled for tomorrow
i have followed these fucks for 15 years in detail and i still don't know how to properly explain them to our lovely pivot-to-ai readers
at least it'll be these suckers' job to do the explaining while i rant
fuck. it's rationalist season
@dgerard @BlueMonday1984 I've been following them for 35 years—hell, I hung out with Curtis Yarvin on usenet in the early 90s, visited him and fondled his lizard once in '93—and the only way I can explain them is that they're larping a bad post-cyberpunk novel and aren't entirely clear on the whole concept of "fiction".
Check out the unhinged classism from one of the lesser figures who's popped up here from time to time (with an added bonus shout-out to Ayn Rand further down the thread).
Jesus, what a fuck. Having spent time in both SF and NYC my guess is that this shithead's SF pedestrian experience is getting in and out of ubers and the treadmills at equinox. That, and he is probably outwardly disdainful, which doesn't go over well in NYC.
Wow, that's some venomously hateful text.
new york city truly is a first and fourth world city overlaid atop each other
Jesus Christ, learn what words mean. Even if you use "Third World" to mean "poor countries", fourth world is not a thing, and people living in extreme poverty in developing countries are in fact not better off than non-Wall Street New Yorkers.
if you, like me, were wondering what the point of that 25 hour non-filibuster filibuster by Booker was, here’s one potential answer.
Booker held a filibuster that wasn't a filibuster - and he scheduled it to let him avoid attending his own committee's probe of his Big Tech pals. […] Oh and Booker and Dems provided unanimous consent to advance a Trump nominee right after Booker's speech.
My favorite meme message board on the internet keeps putting out bangers, but this one is extra good and relevant http://www.b3ta.com/board/11416629
Linking this article just for the Palantir CEO quote at the end https://www.wired.com/story/doge-hackathon-irs-data-palantir/
"We love disruption and whatever is good for America will be good for Americans and very good for Palantir,” Palantir CEO Alex Karp said in a February earnings call. “Disruption at the end of the day exposes things that aren't working. There will be ups and downs. This is a revolution, some people are going to get their heads cut off."
This is a revolution, some people are going to get their heads cut off
Holy smokes
there was a reddit ama with a ukrainian dude who was in charge of procurement of certain drones for the ukrainian army. among these were uavs from anduril, and the uaf decided not to buy them, chiefly because these drones don't work and cost something like 20-50x more than equivalent ukrainian designs that do work. the reason they don't work is russian jamming, about which anduril's people were presumably informed, yet somehow they were still complete assholes (to foreigners only) through the entire process. which also means that ukrainian elint captured how russian jamming works and gave that data to anduril, who proceeded to do fuck all with it, possibly leaking this intel in some signal chat
the reason these were on the table to begin with is that (part of) american weapons aid is provided in the form of money to be spent at american manufacturers; it's not pallets of cash like fox news would like you to believe. (private donations to places like united24 do work this way, but also, for example, denmark does provide money with no strings attached, at least not these.) (yea, i'm effectiving my weaponized altruism, cranking up these numbers of dead russians per thousand euros.) the reasons for the price difference are the slight wage discrepancy between poltava and berkeley, compounded by scale (how many drones does anduril make per year, 20? ukrainians make low millions, probably thousands to hundreds of thousands per type). most of the cost in drone development is driven by software. it doesn't help that thiel's people are swamped in vc money, so neither they nor their drones have to work efficiently, or even work at all. ukrainians don't have the luxury of blowing millions on ai swarming that doesn't work; instead they put that effort into signal processing to avoid jamming, which does work, and which they can verify pretty quickly too.
this is the disruption they're cooking. invest in eastern europe or something
NaNoWriMo? Na, No Mo'. Does this have anything to do with their bungled AI policy? Maybe, maybe not, but hey, the news article I saw the announcement in thought it was pertinent to mention.
Why does NaNoWriMo need a nonprofit anyway? What's next, No Nut November LLC?
As funny as that is, I am sure that there are nonprofits that are aiming to stop fapping.
There's a fucking for-profit that does this, and it's used by the goddamn US Speaker of the House so that his son can monitor his fondling
I am fairly certain that their stupid AI stance at least played a role. I'm part of a largish writing community where at least several hundred people did NaNo each year, and a good part of them donated too. Last year no one partook; instead we did our own internal thing.
If this happened across other communities too, and I wouldn’t be surprised if it did, they must have felt it financially.
considering their endless history of scandal, good riddance
An AI faceswapper/nudifier's database got leaked thanks to its nonexistent security - unsurprisingly, it's loaded with explicit images, including massive amounts of child porn and almost certainly some revenge porn.
WIRED tried reaching out to the company behind the "imagery", but they nuked everything and closed their doors in response.
A recent chapter of one of the official Touhou Project manga had a jab at crypto schemes. This isn't one of the works of art I expected to so explicitly dunk on these unscrupulous scams, but I welcome it all the more for that.
"So, [Elsevier] added an AI question and answer to my article in Computer & Education.
The answers are wrong! They did not ask permission! The answers are WRONG!
How is this science?!"
In other news, the Guardian landed an exclusive scoop on cuts to "AI cancer tech funding in England". Baldur Bjarnason's given his commentary:
Turns out rebranding even the genuinely useful Machine Learning as “AI” doesn’t help them get funding. The only beneficiaries of the bubble seem to be volatile media synthesis engines
You want my opinion, future machine learning research is probably gonna struggle to get funding once the bubble bursts, both due to the "AI" stench rubbing off on the field and due to gen-AI sucking up all the funding that would've gone towards actually useful shit. (Arguably, it's already struggling even before the bubble's burst.)
Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'
"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
- One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
- The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
- It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: "It would be a grave mistake to dismiss this as mere hype."
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."
Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...
....hmmmm....
O_O
The answer may surprise you!
SMBC using the ratsphere as comics fodder, part the manyeth:
transcription
Retrofuturistic Looking Ghost: SCROOOOOGE! I am the ghost of christmas extreme future! Why! Why did you not find a way to indicate to humans 400 generations from now where toxic waste was storrrrrrrred! Look how Tiny Tim's cyborg descendant has to make costly RNA repaaaaaaairs!
Byline: The Longtermist version of A Christmas Carol is way better.
bonus
::: spoiler transcription
Scrooge: I tried, but no, no, I just don't give a shit.
:::
Your mistake, distant future ghost, was in developing RNA repair nanites without creating universal healthcare.
Update on The Shadiversity Drama^tm^: he's still malding about being an utterly soulless waste of oxygen:
Now, some of you may be wondering "Monday, how is that AI-generated? That piece actually has a soul!" Well, as it turns out, it wasn't AI - Shad quite literally stole someone's artwork and passed it off as AI.