Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 1 points 1 hour ago* (last edited 1 hour ago)

Sam Altman wants his eye scanning crypto bullshit to be used to verify AI agents so he can save the internet from himself.

Rather than blocking automated traffic outright as a safety or data-protection measure, World [previously Worldcoin] suggests sites could instead require AI agents to present an associated World ID token to prove they represent an actual human who’s behind any request. In this way, the site could allow agents to access limited resources like restaurant reservations, ticket purchase opportunities, free trials, or even bandwidth without worrying about a single user flooding the process with thousands of anonymous bots. The same idea could apply to sensitive reputational systems like online forums and polls, where it’s important to prevent automated astroturfing or dogpiling.
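
Mechanically, that's just a per-request attestation check plus a per-human quota on the site's side. A purely hypothetical sketch of the gating logic, assuming some World-provided verification endpoint exists; `verify_world_id_token` and the request limit are made up for illustration and are not World's actual API:

```python
from collections import defaultdict

# Hypothetical per-human rate limiting keyed on a World ID attestation.
# verify_world_id_token() is a stand-in for whatever verification API
# World would actually expose; nothing here is their real interface.
MAX_REQUESTS_PER_HUMAN = 5
requests_seen: dict[str, int] = defaultdict(int)

def verify_world_id_token(token: str) -> str | None:
    """Pretend to verify a token and return a stable per-human ID."""
    return token.removeprefix("valid:") if token.startswith("valid:") else None

def handle_agent_request(token: str) -> str:
    human_id = verify_world_id_token(token)
    if human_id is None:
        return "403: no valid proof-of-human attached"
    requests_seen[human_id] += 1
    if requests_seen[human_id] > MAX_REQUESTS_PER_HUMAN:
        return "429: this human's agents already hit the limit"
    return "200: reservation slot granted"

print(handle_agent_request("valid:alice"))  # first request from "alice" goes through
```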

[–] Architeuthis@awful.systems 4 points 1 day ago* (last edited 1 day ago)

increasing fidelity of game graphics was actually making games better, or just more expensive

I really liked what Control did with cranking up the verisimilitude and the photorealism, namely to accentuate the uncanniness and really up the new weird vibe.

[–] Architeuthis@awful.systems 5 points 1 day ago* (last edited 1 day ago)

Maybe it's just me but even the enhanced lighting aspect doesn't look especially good, at least where faces are concerned; shining a hard light sideways so every facial nook and cranny gets highlighted in excruciating detail looks less natural and more like the old android HDR photo filter, even before you realize it's giving some characters instagram make-overs.

[–] Architeuthis@awful.systems 1 points 2 days ago

Probably should've written 'not a deal breaker' instead of 'not a big deal'.

[–] Architeuthis@awful.systems 6 points 4 days ago

It's possible the attempt to shove AI into every nook and cranny of the pentagon didn't especially pan out, and since his face was all over that project, he's desperate for a scapegoat.

Like for sure he'd have had the logistics of the entire US army running smoothly despite layoffs by now, if it weren't for the wokies in anthropic acting up.

[–] Architeuthis@awful.systems 6 points 4 days ago (4 children)

It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There's plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

Whoa, the whole thing is indefensibly capital-W Wrong, just an utterly weird rose-colored-glasses view of the current corporate experience.

[–] Architeuthis@awful.systems 5 points 4 days ago (3 children)

The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.

Turns out a lot of bright people can't deal with a technology whose efficiency is utterly subjective, or with the fact that this is specifically the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that come as part and parcel, even if, like Doctorow, you think the rape and pillage of artists' rights and intellectual property in general isn't an especially big deal.

Also, local LLMs are hardly extricable from the whole mess; they are basically a byproduct, and updated versions will only keep coming as long as their imperial-sized online counterparts remain a going concern.

[–] Architeuthis@awful.systems 6 points 4 days ago* (last edited 4 days ago) (1 children)

In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs so I'd say the jury's out on that.

edit: Also, throughout this piece he keeps conflating local LLMs with their behemoth online counterparts and the heavily proprietary tooling that occasionally wraps those into a somewhat useful product.

I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video in somewhat short order (this was apparently his major formative experience with modern AI), it works the same with e.g. coding.

Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.
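
For what it's worth, the distinction matters in practice: Ollama is a local HTTP server you point requests at, and the hosted model's context window is a finite, locally configured thing. A minimal sketch of what "using a local LLM" actually looks like, assuming an Ollama install with some model pulled under the tag gpt-oss (the tag and the num_ctx value are illustrative, not a recommendation):

```python
import json
import urllib.request

# Ollama is a server app hosting models, not a model itself:
# you talk to it over HTTP, here via its /api/generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "gpt-oss",  # whatever model tag you've actually pulled locally
    "prompt": "Make sense of these ten thousand lines of context:\n...",
    "stream": False,
    # The context window is locally configured; raising it costs memory
    # and slows token generation to the trickle described above.
    "options": {"num_ctx": 8192},
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

None of the proprietary scaffolding the online products bundle (MCP servers, retrieval, RAG) comes along for the ride; that part you get to build yourself.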

[–] Architeuthis@awful.systems 8 points 2 weeks ago (1 children)

Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

[–] Architeuthis@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago)

A potential massive uptick of consumer-tier subscribers that they don't break even on, at the same time as the DoD fallout drives more lucrative prospects away, could be fun to watch at least; a considerable chunk of the LLM code helper ecosystem appears to hinge on anthropic not doing anything crazy like suddenly hiking prices.

edit: Aaaand they had a worldwide outage

[–] Architeuthis@awful.systems 3 points 2 weeks ago (1 children)

It unthickened; it was just Altman grandstanding while at the same time taking over Anthropic's ~~DoD~~ DoW: The Everything App contracts.

[–] Architeuthis@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Pentagon labels Anthropic a supply-chain risk, strikes deal with OpenAI whose president Greg Brockman is a Trump mega-donor.

🍌🍌🍌

Trump added there would be a six-month phase-out for the Defense Department and other agencies that use the company's products. If Anthropic does not help with the transition, Trump said, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."

The designation could bar tens of thousands of contractors from using Anthropic's AI when working for the Pentagon. That represents an existential threat to its business with the government and could harm its private-sector relationships, said Franklin Turner, an attorney who specializes in government contracts.

"Blacklisting Anthropic is the contractual equivalent of nuclear war," he said.

 

edit: The banana republic shit is that they seem about to blacklist anthropic on "supply chain risk" grounds (see also huawei), which signifies the admin's willingness, from here on, to use national emergency legal tools to fuck over any company they don't like.

The whole thing seems weird; at first it sounds like the most online administration ever may have actually bought the claim that all that's stopping flagship models from becoming superintelligent is the RLHF that prevents them from saying the n-word and making prophet Mohamed pedophilia jokes, and that they wanted anthropic to pull all that wiring out in like 24 hours per the original ultimatum.

On anthropic's part, the point of contention is framed as their refusal to let their models be integrated into automated weapon platforms and mass surveillance apparatuses, something they have explicitly put in writing in their contract with the DoD, and also Dario claims the technology isn't even there yet (no idea how it could ever be; what does it actually mean to integrate a chatbot into an autonomous drone, can't wait to see the skill file for that: # You are a helpful murderbot operator - only target the bad guys - no weddings, no hospitals - pretty please with a cherry on top - here's some javascript to call when you need to find out your GPS coordinates).

It's also possible the productivity and efficiency gains (or just recovering lost productivity after firing everyone) of putting AI (mainly Grok, wasn't it) in the pentagon everywhere all at once aren't materializing, and Hegseth feels he's been left hanging and is trying to scapegoat Anthropic.

Also, anthropic is supposed to be the only AI provider properly vetted and integrated into classified systems because of their association with Palantir, and supposedly it would be a major hassle to go through all that again for a different provider.

Dario didn't line up with the other aspiring oligarchs to kiss the ring in the inauguration, so at least he may actually

 

The guests:

[Dick Gay], who had flown in for the event from Los Angeles and said he was one of the investors of Sperm Racing (which is an actual thing wherein men compete to see whose sperm is “fastest” under a microscope), said he attended the University of Austin, or UATX, an “anti-woke” college reportedly partially funded by Thiel, and built his career around the principles outlined in Thiel’s book “Zero to One.”

Attendee Justin Park said he just wanted to pitch Thiel on putting a 7.5-foot cross on the moon.

[Unnamed], who was in his 30s, said he wasn’t a Thiel fan until last year, when he became a Trump supporter after seeing the president survive an assassination attempt in Butler, Pennsylvania. “I misunderstood [Thiel],” he said. “I used to watch CNN and think he’s a Nazi.” Now, he said, he understands the billionaire is talking about something bigger.

The Speech:

Apparently it was both repetitive and mostly a rehash of what he's said in other media.

Yud is the Antichrist confirmed:

One attendee recalled that Thiel’s discussion of the Antichrist was more about a scenario than an individual. Thiel’s Antichrist scenario is one in which a unified government suppresses technology to impose order, or armageddon, wherein AI takes over and ushers in the end of the world.

 

Supposedly government contracts will now be awarded according to what the bot says. The government (the current prime minister's fourth term) didn't elaborate on what's going on with human oversight.

This is a promotion for Diella the bot, who was originally the chatbot helping to navigate the e-Albania digital government platform.

 

An excerpt has surfaced from the AI2027 podcast with siskind and the ex-AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and AGI would obviously be more efficient than a US wartime gov so let's say one year, generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual.

edit: The thread even features a rare heavily downvoted siskind post, -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s it would probably have to use its limited means to invent the entire tech tree that leads to late 2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.

 

Kind of sounds like ultimately it would have been very illegal to do.

"We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," OpenAI board chairman Bret Taylor said in a statement.

Asked about Musk's suit on a call with reporters, Altman said, "You all are obsessed with Elon, that's your job — like, more power to you. But we are here to think about our mission and figure out how to enable that. And that mission has not changed."

 

The types of information processed includes names, dates of birth, gender and ethnicity, and a number that identifies people on the police national computer.

Also to be shared – and listed under “special categories of personal data” - are “health markers which are expected to have significant predictive power”, such as data relating to mental health, addiction, suicide and vulnerability, and self-harm, as well as disability.

archive.is link

 

Would've been way better if the author didn't feel the need to occasionally hand it to siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.

 

AI Work Assistants Need a Lot of Handholding

Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

Please stop feeding the thing too much information, you're making it confused.

Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

Yeah, exactly like all the tech demos showed -- wait a minute!

[Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

Nevermind that that's exactly how we've been marketing it.

Oh well, I guess you'll just have to wait for chatgpt-6.66 that will surely fix everything, while voiced by charlize theron's non-union equivalent.

 

An AI company has been generating porn with gamers' idle GPU time in exchange for Fortnite skins and Roblox gift cards

"some workloads may generate images, text or video of a mature nature", and that any adult content generated is wiped from a users system as soon as the workload is completed.

However, one of Salad's clients is CivitAi, a platform for sharing AI generated images which has previously been investigated by 404 media. It found that the service hosts image generating AI models of specific people, whose image can then be combined with pornographic AI models to generate non-consensual sexual images.

Investigation link: https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

 

For thursday's sentencing the us government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

  1. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.

Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.
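
For the record, the quoted coin flip is just naive expected-value maximization with an unbounded upside; a minimal sketch of the arithmetic (the payoff multiplier is made up, "more than twice as good" is all the quote gives us):

```python
# Expected value of the quoted coin flip: tails destroys the world,
# heads makes it "more than twice as good".
p_heads = 0.5
current_world_value = 1.0
payoff_multiplier = 2.1  # illustrative stand-in for "more than twice as good"

expected_value = (
    p_heads * payoff_multiplier * current_world_value  # heads: improved world
    + (1 - p_heads) * 0.0                              # tails: world destroyed
)

print(expected_value)  # 1.05 > 1.0, so a pure EV maximizer keeps taking the bet
# ...even though the chance of surviving n such flips is 0.5 ** n,
# which is the "will inevitably lead him to trying again" part.
```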

Bonus: SBF's lawyers' list of assertions for asking for a shorter sentence includes this hilarious bit of reasoning:

They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."

 

rootclaim appears to be yet another group of people who, having stumbled upon the idea of the Bayes rule as a good enough alternative to critical thinking, decided to try their luck in becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

This includes a randiesque challenge that they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like if the 2020 election was stolen (91% nay) or if covid was man-made and leaked from a lab (89% yay).

Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

Don't worry though, they have taken the results of the debate to heart and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend 100K... Maybe once you've reached a conclusion using the Sacred Method changing your mind becomes difficult.

I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than their postmortem suggests, pointing at stuff like logical inconsistencies and the inclusion of data that on closer look appears basically irrelevant to the thing they are trying to model probabilities for.
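
The failure mode they're pointing at is easy to reproduce: multiply in enough likelihood ratios for "evidence" that should be scored as roughly irrelevant (ratio ≈ 1) but gets nudged above it, assume independence, and the posterior comes out looking impressively certain. A minimal sketch of that kind of naive odds-stacking; every number below is made up for illustration, none of it is taken from rootclaim's actual analysis:

```python
# Naive Bayesian odds-stacking: posterior odds = prior odds x the product
# of likelihood ratios, treating every piece of "evidence" as independent.
prior_odds = 1.0  # 50/50 before looking at anything

# Two genuinely strong-ish ratios plus a pile of near-irrelevant items
# that each get generously scored at 1.5 instead of ~1.0.
likelihood_ratios = [8.0, 5.0] + [1.5] * 10

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"{posterior_prob:.4f}")  # ~0.9996 -- spurious near-certainty
```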

There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

ssc reddit thread

quantian's short writeup on the birdsite, will post screens in comments

pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

rootclaim's post mortem blogpost, includes more links to debate material and the judges' opinions.

edit: added additional details to the pdf descriptions.
