rook

joined 2 years ago
[–] rook@awful.systems 5 points 1 day ago* (last edited 1 day ago)

Remember there’s the bit of spacex that runs a successful commercial rocketry program, but also the bit of spacex that keeps blowing up stupid giant rockets.

All of musk’s companies have to support one of his idiotic pet projects… tesla got the cybertruck, x got grok, spacex got starship. None of them can be stopped, because they’re his and he’s personally invested in them. His flunkeys can only make questionable financial decisions around those projects, because he will fire them if they don’t.

Tesla is struggling and is trying to sidestep into humanoid robotics (a different kind of stupid idea), x was always a money sink, and now elon is concerned that his ai waifu might die without an injection of sweet government cash. It isn’t clear he’s capable of giving a shit about the consequences of any of this.

[–] rook@awful.systems 5 points 1 day ago

We use starlink at work for communicating with some remote customer sites, and it’s been entirely adequate. As a super-subjective latency benchmark, I didn’t notice any particular difference between interactive ssh sessions to the starlink sites and to the 4g lte sites in the same country. It’s been easier to set up and more reliable than some of the 4g links.
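(If anyone wants to make that comparison less subjective, it’s easy to script. A minimal sketch, assuming two placeholder hostnames that accept TCP connections on the ssh port; median connect time is a crude proxy for interactive latency:)

```python
# Rough round-trip comparison between two links by timing TCP connects to the
# ssh port. Hostnames are placeholders, not real sites.
import socket
import statistics
import time

HOSTS = ["site-starlink.example.com", "site-lte.example.com"]  # hypothetical

def connect_time(host: str, port: int = 22, timeout: float = 5.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000  # milliseconds

for host in HOSTS:
    samples = [connect_time(host) for _ in range(20)]
    print(f"{host}: median {statistics.median(samples):.0f} ms")
```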

I don’t like the fact that we’re paying elon money, but in the absence of a non-evil, non-ecologically disastrous, reasonably priced alternative, I don’t really have anything to offer management as a replacement. Everything else is either much worse, or more expensive and still worse, or vastly more expensive.

[–] rook@awful.systems 10 points 2 days ago (2 children)

Bit early to celebrate, but every bit of grit in the wheels of the llm machine is welcome: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift

  • recall might be rethought, again
  • copilot integration in the most stupid places (notepad, paint, maybe others) “under review”
  • no new copilot integration with other tools that ship with windows

Still plenty of other ai projects going full steam ahead, but promotion in plenty of tech companies, and especially microsoft, comes with being associated with a product launch, and if you’re smart what happens after the launch is someone else’s problem. I wouldn’t be surprised to see plenty of this stuff clinging on until it reaches consumers, and then being immediately “scaled back”.

[–] rook@awful.systems 11 points 2 days ago (9 children)

There are other posts of the same story that include the original “dev” learning his lesson by using a cheaper model instead of just using a clock.

https://bsky.app/profile/rusty.todayintabs.com/post/3mdrdn3uu7226

There’s also an interesting hackernews thread: https://news.ycombinator.com/item?id=46854150

Stupid stuff openclaw did for me:

  • Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
  • Signed up for a Gmail account using a pay as you go sim in an old android handset connected with ADB for sms reading, and again proceeded to get itself banned by hammering the crap out of the docs api
  • Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.

Unless you can budget $1k a week, this thing is next to useless. Once these free offers end on models a lot of people will stop using it, it's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.

I hadn’t realised quite how terrible the basic offering was. Every reinvented-cron-but-unaffordable project pushes the ai companies a little closer to bankruptcy, which is better than nothing, I guess.
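The quoted numbers aren’t even implausible. A back-of-envelope sketch, assuming roughly 4 characters per token, a per-minute polling loop and an illustrative $2 per million input tokens (the real rates vary by model and provider):

```python
# Back-of-envelope cost of an agent that sends ~250k characters of context per
# request. Price and request rate are illustrative assumptions, not quotes.
CHARS_PER_REQUEST = 250_000
CHARS_PER_TOKEN = 4            # common rough approximation
PRICE_PER_MTOK = 2.00          # assumed $ per million input tokens
REQUESTS_PER_HOUR = 60         # e.g. polling every minute

tokens_per_request = CHARS_PER_REQUEST / CHARS_PER_TOKEN
cost_per_request = tokens_per_request / 1_000_000 * PRICE_PER_MTOK
weekly_cost = cost_per_request * REQUESTS_PER_HOUR * 24 * 7

print(f"~{tokens_per_request:,.0f} input tokens per request")
print(f"~${cost_per_request:.3f} per request, ~${weekly_cost:,.0f} per week")
```

Under those assumptions you land at roughly $1.3k a week before any output tokens, which is right in line with the “budget $1k a week” complaint.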

[–] rook@awful.systems 8 points 2 days ago

I think there might have been a golden age of recruitment on linkedin, and it might have passed. A friend of mine has been a CTO at a couple of small places, and recruited a whole bunch of their employees via linkedin, but now finds that there’s just too much genai bullshit and it is becoming uneconomical to find real candidates there. The problem isn’t linkedin-specific, but I think it has been hit pretty hard.

[–] rook@awful.systems 4 points 2 days ago

Someone has a program to steal people’s entire codebases using malicious ai coding assistant extensions.

https://www.koi.ai/blog/maliciouscorgi-the-cute-looking-ai-extensions-leaking-code-from-1-5-million-developers

(note, it is an ai firm posting this, complete with cutesy slop hero image)

The vscode extensions actually do exactly what they advertise, it’s just that they also take all your code and share it with a third party for whatever purpose.

[–] rook@awful.systems 7 points 3 days ago (1 children)

Some suggestion here that notbyai.fyi is an ai industry op: https://social.treehouse.systems/@imbl/115978426251286619

Seems plausible. Notbyai seems pretty keen on ai, is very relaxed about what counts as “not by ai”, and the whole thing adds up to a scheme whereby you pay a pro-ai techbro a monthly subscription to advertise to ai firms that your website is ideal for scraping training data from.

but here's the fucking kicker. the "founder", allen hsu (notbyai.fyi/about), is the ux design lead at modo modo (modomodoagency.com/leadership), which is an ai design company (modomodoagency.com/about)

Artificial intelligence (AI) is cool and we embrace it. But when it comes to solving complex business problems, we don’t just press a few keys to generate answers with ChatGPT. We research, interview, brainstorm, and go through a human-centric process to come up with content and solutions that are tailored to your unique business need.

[–] rook@awful.systems 11 points 3 days ago (11 children)

I know this is like shooting very large fish in a very small barrel, but the openclaws/molt/clawd thing is an amazing source of utter, baffling ineptitude.

For example, what if you could replace cron with a stochastic scheduler that cost you a dollar an hour by running an operation on someone else’s gpu farm, instead of just checking the local system clock?

The user was then pleased to announce that they’d been able to solve the problem by changing model and reducing the polling interval. Instead of just checking the clock. For free.

https://bsky.app/profile/rusty.todayintabs.com/post/3mdrdhzqmr226
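For contrast, here is the entire problem the agent was burning tokens on, as a minimal sketch (the task body and interval are placeholders):

```python
# The free alternative: check the local clock and run the task on schedule.
# No tokens, no gpu farm, no polling budget.
# (Or skip the script entirely and use a crontab entry.)
import time
from datetime import datetime

def run_task() -> None:
    # placeholder for whatever the agent was supposed to trigger
    print(f"task ran at {datetime.now().isoformat(timespec='seconds')}")

INTERVAL_SECONDS = 3600  # e.g. hourly

while True:
    run_task()
    time.sleep(INTERVAL_SECONDS)
```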

[–] rook@awful.systems 8 points 4 days ago (1 children)

Moltbook was vibecoded nonsense without the faintest understanding of web security. Who’d have thought.

https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

(Incidentally, I’m pretty certain the headline is wrong… it looks like you cannot take control of agents which post to moltbook, but you can take control of their accounts, and post anything you like. Useful for pump-and-dump memecoin scams, for example)

O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’”

(snip)

The URL to the Supabase and the publishable key was sitting on Moltbook’s website. “With this publishable key (which advised by Supabase not to be used to retrieve sensitive data) every agent's secret API key, claim tokens, verification codes, and owner relationships, all of it sitting there completely unprotected for anyone to visit the URL,” O’Reilly said.

(snip)

He said the security failure was frustrating, in part, because it would have been trivially easy to fix. Just two SQL statements would have protected the API keys. “A lot of these vibe coders and new developers, even some big companies, are using Supabase,” O’Reilly said. “The reason a lot of vibe coders like to use it is because it’s all GUI driven, so you don’t need to connect to a database and run SQL commands.”
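For anyone wondering what “just two SQL statements” looks like in practice: a minimal sketch of the kind of fix described, assuming a Postgres/Supabase table called agents with an owner_id column (moltbook’s actual schema isn’t public), run over a direct database connection:

```python
# The kind of two-statement fix described above: turn on row level security so
# the public "publishable" key can no longer read every row, then add a policy
# so owners can only see their own agents. Table/column names are assumptions.
import psycopg2

conn = psycopg2.connect(
    "postgresql://postgres:password@db.example.supabase.co:5432/postgres"
)
with conn, conn.cursor() as cur:
    # 1. Enable row level security on the exposed table.
    cur.execute("ALTER TABLE agents ENABLE ROW LEVEL SECURITY;")
    # 2. Only let the authenticated owner read their own rows
    #    (auth.uid() is Supabase's helper for the caller's user id).
    cur.execute(
        "CREATE POLICY agents_owner_only ON agents "
        "FOR SELECT USING (auth.uid() = owner_id);"
    )
```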

[–] rook@awful.systems 4 points 6 days ago

The whole thing seems extra sketchy to me, because of it coinciding with the firing of an awful lot of people. It sounds a little bit like Amazon’s hand might have been forced here, because they fired someone who knew where the skeletons were and realised this was their last chance to have any kind of control over the narrative.

[–] rook@awful.systems 10 points 6 days ago (2 children)

Amazon Found ‘High Volume’ Of Child Sex Abuse Material in AI Training Data

The tech giant reported hundreds of thousands of cases of suspected child sexual abuse material, but won’t say where it came from

I’ll bet.

https://www.bloomberg.com/news/features/2026-01-29/amazon-found-child-sex-abuse-in-ai-training-data

[–] rook@awful.systems 8 points 6 days ago (3 children)

Just seen a clip of aronofsky’s genai revolutionary war thing and it is incredibly bad. Just… every detail is shit. Ways in which I hadn’t previously imagined that the uncanny valley would intrude. Even if it weren’t for the simulated flesh golems, one of whom seems to be wearing anthony hopkins’ skin as a clumsy disguise, the framing and pacing just feels like the model was trained on endless adverts and corporate talking head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.

I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?

https://bsky.app/profile/ethangach.bsky.social/post/3mdljt2wdcs2v
