towerful

joined 2 years ago
[–] towerful@programming.dev 1 points 3 days ago

I haven't experienced "2 or 3 prompts later" regression.
I have found it helps to ask it to queue changes until I tell it to work through the queue.
Maybe ask it to produce a single file for review, or tell it how to modify a file (and why, it likes an explanation).
But always stack up changes, ask it to review its queue of changes etc.
Then ask it to do it all in a one-er.
Although, this is the first time Claude said such a request will take a long time (instead of showing its working/thinking and doing it in 20 minutes).
Maybe this is when it starts forgetting why it did things.

[–] towerful@programming.dev 12 points 3 days ago (5 children)

Probably not relevant to the article, but I had to rant. I'm drunk, and suffering!

I'm trying the old vibe coding, except with actual specs. I feel like I have to. I hate it.

I think refining the spec/prompt with Claude makes sense. I found it helped me crystallise my spec and highlight gaps & pitfalls.
At which point, I should've just coded it.
I'd have known what it does, and it would be exactly what I needed.
But I figured I'd see what Claude could do.

So, my "dev->staging->prod" database migration system with planning, apply and rollback stages was built by Claude (the project isn't in a production state yet, so I thought it would be a good thing to try AI on).
There are system tables that should migrate fully (but allow for review if they are structurally different) and there are data tables that should only alter schema (not affect data). It's decently complex, enough that it would take me a week or so to write and generate, but maybe I could spend a day or two writing a spec and see what Claude can do.
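For anyone curious, the system/data table split from my spec could be sketched roughly like this (table names and the exact rules are made up for illustration, not Claude's actual output):

```python
# Hypothetical sketch of the spec: system tables migrate fully
# (pausing for review if their structure differs), data tables
# only get schema changes. Names and rules here are illustrative.

SYSTEM_TABLES = {"app_settings", "feature_flags"}  # migrate schema AND data
DATA_TABLES = {"orders", "customers"}              # migrate schema ONLY

def plan_table(table: str, schema_changed: bool) -> list[str]:
    """Return the planned actions for one table."""
    actions = []
    if table in SYSTEM_TABLES:
        if schema_changed:
            actions.append("review")    # structurally different: human review first
        actions.append("copy_data")     # system tables migrate their data too
    elif table in DATA_TABLES and schema_changed:
        actions.append("alter_schema")  # alter schema, never touch the rows
    return actions
```

The point being: the rules themselves fit in a dozen lines; the complexity is in the diffing, planning and rollback around them.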

It wanted to use Python, and told me that migra is outdated, then tried to generate something that would do it all itself.
I told it to use results (the migra replacement). After convincing it that results was the actual library name, that it can produce schema differences, and that it has a different API than migra (it kept trying to use it as if it were migra, and... so much wasted time!), I finally got working code. And all the logs and CLI etc resulted in SUCCESS messages.
Except that tables named like "helloThere" were ignored, because it hadn't considered that tables might have uppercase in them. So I got it to fix that. And now it's working code.
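For context, this is the classic Postgres gotcha: unquoted identifiers fold to lowercase, so "helloThere" only resolves if it's double-quoted. A minimal quoting helper (my sketch, not what Claude generated):

```python
def quote_ident(name: str) -> str:
    """Quote a PostgreSQL identifier so mixed-case names like
    helloThere survive; embedded double quotes get doubled.
    Unquoted, Postgres folds helloThere -> hellothere and the
    table lookup silently misses."""
    return '"' + name.replace('"', '""') + '"'
```

(For real code you'd lean on the driver's own quoting rather than roll your own, but that's the failure mode in one line.)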

It looks nicely complex with sensible file names.
Looking at the code: there are no single responsibilities, no extensibility. It's actually a fucking mess. Variables passed all over the place, things that should be in the current command context being randomly generated, config hard coded, a function randomly imported from another file (literally the only place that function is used) because... I don't know.
It's just a bunch of functions that do stuff, named to be impressive, in files that are named impressively (ignoring the content). And maybe there are context-related functions in the same file, or maybe there are "just does something that sounds similar" functions.

The logging?
It swallows actual errors, and gives an expected error message instead. I just want actual errors!
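Roughly, the difference between what it wrote and what I actually want (a sketch, function names made up):

```python
import logging

log = logging.getLogger("migrate")

def apply_step_swallowing(step):
    # What the generated code did: catch everything, emit a canned
    # message, carry on. The real cause is lost.
    try:
        step()
    except Exception:
        log.error("Migration step failed (this is expected sometimes)")

def apply_step_honest(step):
    # What I want: log the real traceback, then let it propagate.
    try:
        step()
    except Exception:
        log.exception("Migration step failed")
        raise
```

Same number of lines, completely different debugging experience.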

It's hard to analyse the code. It's not that it doesn't make sense from a single entry point. It's more that "what does this function do" doesn't make sense in isolation.
"Where else might this be a problem" has to go to Claude, cause like fuck could I find it it's probably in a functionally similar function with a slightly different name and parameters or some bullshit.

If I didn't know better, and looked at similar GitHub projects ... Yeh, it seems appropriate.

It is absolutely "manager pleasing complexity".
But it does work, after telling it how to fix basic library issues.

Now that it works, I'm getting Claude to refactor it into something hopefully more "make sure functions are relevant to the class they are in" kinda thing. I have low expectations.

I don't EVER want to have to maintain or extend Claude generated code.
I have felt that all the way through this experiment.
It looks right. It might actually work. But it isn't maintainable.
I'm gonna try and get it to be maintainable. There has to be a way.
Maybe my initial 4-page spec accidentally said "then randomise function location".

I'm gonna try Claude for other bits and pieces.
Maybe I'll draw some inspiration from this migration project that Claude wrote (if I can find all the bits) and refactor it into something maintainable (now that I have reference implementations that seem to work, no matter how convolutedly spread they are).

[–] towerful@programming.dev 5 points 4 days ago

"and early exit polling shows that 100% of democrats have been arrested, 100% of 3rd party voters have been deported to Guantanamo bay, and 100% of white republicans have received a 50% tax increase. Mail in voting from the ruling class has seen a 100% turnout for republicans with a 20% tax cut!"

Such wins for amerika

[–] towerful@programming.dev 92 points 1 week ago (3 children)

If the snowman was built on the road, the driver is still at fault for driving carelessly and not paying attention.
Nobody else was hurt. Nobody else's property was damaged. There is no one to be held liable.

This guy drove into a snowman, regardless of where it was.
A static object that only moves in Christmas music.

If it was a snowbank, same deal.
If it was a parked car, same deal.
If it was a fallen telephone/power pole, same deal.
If it was a pile of cinderblocks that fell off the back of a truck, same deal.

The guy either wasn't paying attention, or was being an asshole.
Either way, driving carelessly. The asshole is at fault.

[–] towerful@programming.dev 4 points 1 week ago

BES is amazing. Absolutely fantastic

[–] towerful@programming.dev 27 points 1 week ago (2 children)

Scott Manley has a video on this:
https://youtu.be/DCto6UkBJoI

My takeaway is that it isn't unfeasible. We already have satellites that do a couple of kilowatts, so a cluster of them might make sense. In isolation, it makes sense.
But there is the launch cost, the fact that de-orbiting/de-commissioning is a write-off, and the fact that preferred orbits (lots of sun) will very quickly become unavailable.
So there is kind of a window where you get a preferred orbit, your efficiency is good enough, and your launch costs are low enough.
But it's junk.
It's literally investing in junk.
There is no way this is a legitimate investment.

It has a finite life, regardless of how you stretch your tech. At some point, it can't stay in orbit.
It's AI. There is no way humans are in a position to lock in 4 years of hardware.
It's satellites. There are so many factors outside of our control (beyond getting the launch and orbit right) that there is a massive failure rate.
It's rockets. They are controlled explosives with 1 shot to get it right. Again, massive failure rate.

It just doesn't make sense.
It's feasible. I'm sure humanity would learn a lot. But AI is not a good use of kilowatts of power in space. AI is not a good use of Earth's finite resources to launch satellites (never mind a million?!). AI is not a good reason to pollute the "good" bits of LEO.

[–] towerful@programming.dev 17 points 1 week ago

I've experienced reciting the Pledge of Allegiance and the Lord's Prayer.
It's all indoctrination

[–] towerful@programming.dev 22 points 1 week ago (1 children)

Yeh, do 60fps, 30-bit color... and I guess HDR?
Do things that people can actually appreciate.
And do them in a way that utilises the new tech. 60fps looks completely different from 24fps... Work with that, it's a new media format. Express your talent.

[–] towerful@programming.dev 2 points 1 week ago* (last edited 1 week ago)

The retirement of Ingress NGINX was announced for March 2026, after years of public warnings that the project was in dire need of contributors and maintainers. There will be no more releases for bug fixes, security patches, or any updates of any kind after the project is retired.
This cannot be ignored, brushed off, or left until the last minute to address. We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like Gateway API or one of the many third-party Ingress controllers immediately.

I know it's literally the first paragraph, but I thought it worth commenting for those that only read the title & comments

[–] towerful@programming.dev 3 points 1 week ago

I'd take each of your metrics and multiply it by 10, and then multiply it by another 10 for everything you haven't thought about, then probably double it for redundancy.
Because "fire temp" is meaningless in isolation. You need to know the temperature is evenly distributed (so multiple temperature probes), you need to know the temperature inside and the temperature outside (so you know your furnace isn't literally melting), you need to know it's not building pressure, you need to know it's burning as cleanly as possible (gas inflow, gas outflow, clarity of gas in, clarity of gas out, temperature of gas in, temperature of gas out, status of various gas delivery systems (fans (motor current/voltage/rpm/temp), filters, louvres, valves, pressures, flow rates)), you need to know ash is being removed correctly (that ash grates, shakers, whatever are working correctly, that ash is cooling correctly, that it's being transported away etc).
The gas out will likely go through some heat recovery stages, so you need to know gas flow through those and water flow through those. Then it will likely be scrubbed of harmful chemicals, so you need to know pressures, flow rates etc for all that.
And every motor will have voltage/current/rpm/temperature measurements. Every valve will have a commanded position and actual position. Every pipe will have pressure and temperature sensors.

The multiple fire temperature probes would then be condensed into a pertinent value and a "good" or "fault" condition for the front panel display.
The multiple air inlet readings would be condensed into pertinent information and a good/fault condition.
Pipes of a process will have temperature/pressure good/fault conditions (maybe a low/good/over?)
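Condensing a probe bank into one pertinent value plus a status could look something like this (the thresholds and the "even distribution" rule are illustrative, not from any real system):

```python
def condense(readings, low, high):
    """Reduce a bank of probe readings to one pertinent value and a
    low/good/over/fault status for the front-panel display.
    Thresholds and the spread rule are illustrative only."""
    pertinent = max(readings)           # hottest probe is what matters
    spread = max(readings) - min(readings)
    if spread > (high - low) * 0.25:    # uneven distribution: flag a fault
        return pertinent, "fault"
    if pertinent < low:
        return pertinent, "low"
    if pertinent > high:
        return pertinent, "over"
    return pertinent, "good"
```

So hundreds of raw signals collapse into a handful of panel indicators, while the raw values stay available for when an operator needs to know why.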

And in the old days, before microprocessors and serial communications, it would have been a local-to-sensors control/indicator panel with every reading, then a feed back to the control room where it would be "summarised". So hundreds of signals from each local control/indicator panel.

Imagine if the control room commanded a certain condition, but it wasn't being achieved because a valve was stuck or because some local control overrode it.
How would the control room operators know where to start? Just guess?
When you see a dangerous condition building, you do what is needed to get it under control, and that doesn't happen by guessing.
You need to know why.

[–] towerful@programming.dev 4 points 1 week ago (1 children)

I love cli and config files, so I can write some scripts to automate it all.
It documents itself.
Whenever I have to do GUI stuff, I always forget a step, or do things out of order, or something.
