BranBucket

joined 2 years ago
[–] BranBucket@lemmy.world 2 points 4 hours ago

Absolutely intentionally designed to be that way by the GOP and blatantly obvious when you look at their voting records.

[–] BranBucket@lemmy.world 2 points 4 hours ago

One of their main arguments against taxes is that government will always waste tax dollars due to corruption and incompetence... which is a self-fulfilling prophecy, as they've proven to be some of the most corrupt and incompetent political leaders in history.

[–] BranBucket@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago)

Was there ever any doubt? They've been trying to tie devices to identities for years now. This is just the end game.

The funny thing is, it's kinda hard to prove that surveillance-based ads actually work. Ad platforms like to throw around numbers for how many people see ads, but it's hard to actually tie those numbers to sales. What's worse is that the way these ads are bought, sold, and how ad placement is commodified means that everything remains intentionally vague to the people buying ads. In most cases, all they know is that "everyone is doing it" and "making lots of money," and if they don't join in, they'll be left out of the revenue party.

Right now, enough people are still drinking the kool-aid that it'll remain a safe revenue stream for companies like Meta, unless something happens that hurts the cash flow of their customers. But it kinda makes you wonder: if the vagaries of the online ad ecosystem caused companies to reconsider the investment during an economic crisis, how would these ad platforms that have recently gotten very cozy with fascists make money?

[–] BranBucket@lemmy.world 1 points 2 days ago

If you want to get into how this happens, and the way it happens with other technologies, I'd suggest Neil Postman's Technopoly and Amusing Ourselves To Death as a good start.

[–] BranBucket@lemmy.world 3 points 3 days ago

And I hate it these days. I really do.

I understand why the better creators make their videos the way they do. I understand why there are channels that just churn out hundreds of low-effort vids every day. I get it. At the same time, even the things that are considered quality content on YouTube just don't appeal to me anymore.

People send me links and I can hardly be bothered to watch them, let alone browse for hours.

Oh well.

[–] BranBucket@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

Oof. So little faith in your fellow man.

Guilty as charged. I often wonder what effect dealing with quality control and safety has on my mentality. Much like first responders see a lot of people at their worst, I see a lot of them at their dumbest and laziest.

I think we're still at a net gain over where we were in 1906, but that's subjective. Most of us live longer and more comfortable lives, but that could change if we're not careful, and I don't think we're being particularly careful in this decade. I'm a bit pessimistic, but I don't see it as a bad thing. Back on aviation, the old saying is that it takes an optimist to invent the airplane, and it takes a pessimist to invent the parachute.

I'd rather keep meteors out of it. Some of the planet is quite pretty and whatever species takes over for us might appreciate the view.

[–] BranBucket@lemmy.world 1 points 4 days ago (2 children)

Yup. I'll take the bet.

After all, your expectation of the impact of AI is arguably the better outcome for humanity, isn't it? I'm expecting a sharp increase in horrific industrial accidents and the slow but steady regression of human intellect until we're all mindless drones from sector 7-C. =P

That's a good bet to lose.

Besides, actually paying out on oddball, five year old bets is the kind of thing that made the pre-social media, pre-AI internet great, and I miss that.

[–] BranBucket@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

If I'm arguing in good faith, it's both. We have a tool that uses us, a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I'm going to say is the useful retention and application of that information, and not just for winning trivia night), and as a species we refuse to stop letting ourselves be suckered by it.

In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it: writing did it, books, television, search engines, etc. And so much money has been spent on making this a thing that we're going to have to contend with it until it undeniably starts costing more than it's worth, and if that cost is cultural or societal instead of financial, it might never go away.

> I suspect there's a bigger issue here than "LLM bad". We've been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

I don't pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860s with the telegraph.

[–] BranBucket@lemmy.world 2 points 4 days ago (4 children)

Right. We can't fully blame the existence or even the use of AI. But given the way AI is often used, and the way my armchair studies of human nature tell me it will continue to be used, I think it will lead to more events like this. The trend of easy access and low retention did indeed start before LLMs, but they don't seem to be a remedy for it from what I can tell. At best they're neutral, and I'd argue they make it worse.

We could (and frankly probably will need to, because I doubt AI will be abandoned given the sheer volume of cash that has been dumped into it) build processes to account for the failings of LLMs and the failings in how we use them. Or we could look at existing methods, those we understand and have learned to work effectively with, and reapply them as needed.

My bet is that LLMs and genAI will exacerbate the trend of being info rich and knowledge poor, and the processes we have to create in order to safely and effectively apply it are going to be more costly than any efficiency we get out of adopting it. I could be wrong, but I'd bet you a six-pack of whatever you drink that I'm not. Collectable in five years, if Lemmy hasn't been replaced by LLMmy by then. I'll even ship international if need be. =)

[–] BranBucket@lemmy.world 1 points 4 days ago (6 children)

You're right that we can't rule out complacency and human error. And we have internal reviews precisely to account for complacency. Again, I'm intimately familiar with both the safety culture and the people involved; this is an unusual and recent development. But I suppose asking you to take my word for it might strain credulity. It is what it is.

I'd be inclined to agree with you more if it weren't for how widespread the smaller issues are. The general trend, among the old and young alike, is less actual knowledge of the job and more reliance on quick access to information that often isn't applied properly in context. It existed before AI, and it has gotten worse with its introduction. Something about instant access to information seems to harm retention and application of that info. It's a pretty obvious trend to me, since part of my job is to ensure information is retained and applied properly.

Those procedures built around autopilot, along with other issues of flying more complicated modern aircraft, were dealt with by controlling how information flowed, how it was communicated, and the weight of authority it was given, often with human processes like Crew Resource Management. As I've said, the presentation of information absolutely changes how people understand and apply it. CRM helps because it prompts people to present information to each other in a way that facilitates better decision making and delegation in a crisis.

But autopilot has always been beneficial; its value was obvious right from the start. It reduces pilot fatigue on long-haul flights and helps keep air traffic in the right place. Pilot complacency was never really the worry, but malfunctions were.

In the end, it's not that it can't be done. We could adjust our processes to include LLMs simply because people think they're neat. It's just that there's no compelling evidence that they're better for distributing information, developing procedures, or teaching people how not to die.

[–] BranBucket@lemmy.world 2 points 4 days ago (2 children)

I would ask it a careful question, and I would get a well worded, persuasive, but ultimately careless reply that's just repetition of information and devoid of any new reasoning or insight.

I would carefully ruminate on this reply, and find that at best, it's factually correct because it's an echo of the training data fed into the model, and although it sounds highly persuasive, it likely will need additional work to be adapted into the specific context and details of my situation.

But that's not my main complaint. My complaint is that the medium itself seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves To Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

[–] BranBucket@lemmy.world 1 points 5 days ago* (last edited 4 days ago)

I drank and smoked occasional cigarettes starting around age 16.

However, this was the 90s, and that sort of thing seems to have been tolerated a lot more back then, at least in my area. I can remember getting busted with tequila at age 19 and the main complaint being that we were cutting limes on an antique table without using a cutting board.

And many of the things that are seen as huge problems with both alcohol and tobacco were just starting to get widespread attention at that time.


SMP Selle TRK medium. Super comfy. Best decision I've made since buying the bike.
