MangoCats

joined 1 year ago
[–] MangoCats@feddit.it 1 points 3 days ago (2 children)

The real issue with these candidates is that we're not electing the person, we're electing the team they putatively command: the network you refer to, all the people they work with and trust and will continue to rely on if re-elected. And that's the twist: the candidate can be a total figurehead, even a loose-cannon moron, but who's behind them is what really matters.

Reagan demonstrated this in spades: the lead actor of Bedtime for Bonzo? Really? We finally topped that absurdity with 45, but it was still an unprecedented doozy. His job was to read the script (teleprompter), deliver the lines, end of story; the machine behind him was what put "his" policies into motion.

[–] MangoCats@feddit.it -1 points 3 days ago

A cap at 65 is arbitrary and extreme... I might have thought that when I was 12, but the reality is that experience matters. Dementia matters too, but 65 is no guarantee of dementia, yet. https://old.reddit.com/r/DownWithIncumbency/

[–] MangoCats@feddit.it 1 points 3 days ago (1 children)

I hope I die before I get old.

I don't mind if I live to 120, but when I get to that stage where I need other people to do more for me than I can do for myself... it's time to quit before I get further behind.

[–] MangoCats@feddit.it 2 points 3 days ago

> Committee assignments are granted by seniority, so

that needs to change. Maybe cap the seniority advantage at 10 years, or 5? Draw god-damned straws before giving the gavel to the most senile.

[–] MangoCats@feddit.it 2 points 3 days ago (2 children)

You're telling me I'll still have some oomph when I'm 64: that you'll still need me, that you'll still feed me?

Hard to believe from here in the run-up to it, seems like I'm picking up speed on the back side of the hill...

[–] MangoCats@feddit.it 1 points 3 days ago (2 children)

I wrote this 3 years ago, just as true today as then:

https://old.reddit.com/r/DownWithIncumbency/comments/uxgcrp/we_should_not_serve_the_dead/

U.S. Senator Dianne Feinstein was born in 1933, assumed her office in 1992, and still serves today at the age of 88. Thank you for your service, Dianne, but don't you think it's past time to groom a younger protégé to take your place?

The laws shaped and passed by our statesmen, elder and otherwise, will control how people live for decades to come. Not only should they be of sound mind when crafting and considering these laws, they should also have a bit of skin in the game: live with the results of their decisions for at least some time.

U.S. Presidents must be at least 35 years of age. I propose that, ideally, they should also not be much over 70 years of age while serving. To gently shape our current system toward this ideal, we might modify election laws to deduct age points from candidates who will be over this age threshold while serving. For instance:

For every year in which the candidate would be over the age threshold while serving their term, one electoral point is deducted from their total for each year of age they will be over the threshold.

If the ultimate age threshold is 70, and a presidential candidate will be 66 years of age or younger when sworn in for a four year term, then that candidate will receive all electoral points the same as they do today. But, if they are 67, and their elected term runs at least 6 months past their 70th birthday, then one electoral point is deducted from their total when deciding the election outcome for that period of "age over threshold" during their term. If they are 68, then there would be one point deducted for the third year of their term and two points deducted for the fourth, a total of 3 points off. If they would be 80 when assuming office then that would be 10+11+12+13=46 electoral points deducted, making victory difficult, but not impossible.
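The deduction rule can be sketched in a few lines of Python. This is just an illustration of the arithmetic, not anything from the original post; the function name is mine, and it counts whole years of age at the start of each year served, which matches the 80-year-old example (10+11+12+13 = 46). The post rounds part-year cases, like the 67- and 68-year-old examples, a bit more loosely.

```python
def electoral_deduction(age_at_inauguration, threshold=70, term_years=4):
    # For each year of the term, deduct one electoral point per year of
    # age over the threshold. Ages are counted in whole years at the
    # start of each year served (a simplification; the post handles
    # part-years less consistently).
    total = 0
    for year in range(term_years):
        over = (age_at_inauguration + year) - threshold
        if over > 0:
            total += over
    return total

print(electoral_deduction(80))  # 10 + 11 + 12 + 13 = 46
```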

If an older candidate truly is the better choice and will win by such a wide margin, then let the people choose them to continue to serve. But their advantages need to be clear over a younger candidate.

To avoid disruption to the current system and fields of candidates, the age threshold could be "soft started" at 90 and reduced by one year per year until it reaches 70. So, if this system of old age disadvantage were started in the year 2025, it would not reach its final age of 70 until 2045.
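The soft start described above is just a linear ramp; a minimal sketch (the function name and signature are mine, not from the post):

```python
def age_threshold(election_year, start_year=2025, initial=90, final=70):
    # Threshold starts at the initial value in the first year and drops
    # by one each year until it bottoms out at the final value
    # (70 by 2045, if the system starts at 90 in 2025).
    return max(final, initial - (election_year - start_year))

print(age_threshold(2025), age_threshold(2035), age_threshold(2045))  # 90 80 70
```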

Senators and members of the House of Representatives could face similar age disadvantages, granting 0.25% of the popular vote per year of age that would be served over the threshold age. If an 88 year old senator runs for re-election against an age threshold of 70, they would be granting their opponent a (18+19+20+21+22+23)*0.25 = 30.75% advantage in the election; in other words, they would need to win more than 80.75% of the popular vote, enough to still hold a majority after the 30.75-point deduction, in order to be elected against a candidate 64 years of age or younger.
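The congressional version works the same way, just denominated in fractions of the popular vote instead of electoral points. A sketch of that arithmetic (names are mine; whole years of age are counted at the start of each year served):

```python
def popular_vote_disadvantage(age_at_start, term_years=6,
                              threshold=70, pct_per_year=0.25):
    # Opponent's advantage, in percent of the popular vote: 0.25% per
    # year of age over the threshold, summed over each year of the term.
    return sum(max(0, (age_at_start + y) - threshold) * pct_per_year
               for y in range(term_years))

print(popular_vote_disadvantage(88))  # (18+19+20+21+22+23) * 0.25 = 30.75
```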

We've already got the wisdom of the elders on the Supreme Court; let's keep the new laws relevant to the people they will be impacting.

[–] MangoCats@feddit.it 7 points 3 days ago

To an extent, COVID hit some of the oldest boomers, but it was a little early; if we had held off COVID for another 10 years, it definitely would have been a prime boomer expiration accelerator.

As things are, my parents are among the earliest boomers and they're just turning 80. The death-rate boom should be picking up speed soon. Too bad they're giving all their accumulated wealth to the healthcare industry instead of their kids.

[–] MangoCats@feddit.it 1 points 5 days ago

My last confirmed faulty-hardware crash (one triggered by user operation, not an outright failure to boot or a random crash "for no particular reason", like a program hitting a failing SSD) came in the late 90s: a GPU card that would drag down the system bus voltage in response to certain CAD operations, repeatably. Do this rotation, watch the CPU hard-reboot, every time. Stay away from the GPU-heavy operations: no problems.

These days the browser is the OS for over half of what happens on my work machines. And they're almost, but not quite, 100% reliable, until they're not. Working out those rare problems takes a long time, and with "progress" it feels like they've reached a kind of equilibrium where the rate of new problem introduction is about the same as the rate of known problem fixes.

[–] MangoCats@feddit.it 0 points 5 days ago (2 children)

More often than crashing outright, I hit situations where the browser just isn't working: it won't load pages or won't register button clicks, and the only thing (on Windows) that will fix it is a reboot. On Linux, closing the browser and restarting it will usually get it going again. Yeah, BSODs are rare lately (though not entirely gone), but malfunctions still abound.

[–] MangoCats@feddit.it 2 points 5 days ago

One response I have to copy-paste over and over concerns vulnerabilities in the CUPS printer driver chain that don't apply, because we don't print arbitrary things, we only print things that we create. Yeah, there's a vulnerability here in ImageMagick if you throw it such-and-such maliciously crafted input... well, we only allow it to process our internally generated reports, and there's no pathway for maliciously crafted input to reach it, so...

[–] MangoCats@feddit.it 1 points 5 days ago

> You need a person with a lot of experience to get something useful from this bot,

Not entirely true. You get a lot more useful things from the bots when they are driven by people with a lot of experience. The problem coming now is a magnified version of the "skript kiddiez" of the early Google days, when inexperienced people could just find exploits on the web and copy-paste them. Today, LLMs can actually find vulns and develop exploits for people who don't have any knowledge of the languages the exploits are written in.

> every time we actually measure, the results show that your experienced person will be quicker and better not using it at all, and doing the same work themselves.

From my perspective, your data is out of date. I've been tracking the "usefulness" of frontier models in accelerating development speed for experienced people over the past 2 years. Two years ago, total waste of time. One year ago - equivocal, sometimes it accelerates an implementation, sometimes not. Six months ago, it was clearly helping more than hurting in most cases, and it has only continued to improve since then.

Knowing what you are doing helps. Trusting that the LLM will help, helps: if you set out to show it's a waste of time, a waste of time it will be. Lately, treating the LLM like a consultant, just hired and likely to disappear any day, helps. Take the time to run all the formal processes: develop the requirements documentation, tests, etc. Yes, that "slows things down", but not in the long run across realistic project life cycles, even with humans doing the work. Also along those lines: keep designs modular, with modules of reasonable complexity; monolithic monster blocks of logic don't maintain well for people either. LLM implementations start falling apart when their effective context windows are exceeded (and, in truth, so do people's).

[–] MangoCats@feddit.it 1 points 5 days ago

> no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate

Yeah, that's cooked data - it's too easy to ask the LLM to give you the CVE list, the CVSS distribution / severity buckets, timelines, everything you might want.

I have LLMs doing pull request reviews, and by default they just take potshots; but if you prompt them, they will point directly to the files and line numbers where the problems they're flagging reside...
