BranBucket

joined 2 years ago
[–] BranBucket@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

> Oof. So little faith in your fellow man.

Guilty as charged. I often wonder what effect dealing with quality control and safety has on my mentality. Much like first responders see a lot of people at their worst, I see a lot of them at their dumbest and laziest.

I think we're still at a net gain over where we were in 1906, but that's subjective. Most of us live longer and more comfortable lives, but that could change if we're not careful, and I don't think we're being particularly careful in this decade. I'm a bit pessimistic, but I don't see it as a bad thing. Back on aviation, the old saying is that it takes an optimist to invent the airplane, and it takes a pessimist to invent the parachute.

I'd rather keep meteors out of it. Some of the planet is quite pretty and whatever species takes over for us might appreciate the view.

[–] BranBucket@lemmy.world 1 points 1 day ago (2 children)

Yup. I'll take the bet.

After all, your expectation of the impact of AI is arguably the better outcome for humanity, isn't it? I'm expecting a sharp increase in horrific industrial accidents and the slow but steady regression of human intellect until we're all mindless drones from sector 7-C. =P

That's a good bet to lose.

Besides, actually paying out on oddball, five year old bets is the kind of thing that made the pre-social media, pre-AI internet great, and I miss that.

[–] BranBucket@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

If I'm arguing in good faith, it's both. We have a tool that uses us, a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I'm going to say is the useful retention and application of that information, and not just for winning trivia night), and as a species we refuse to stop letting ourselves be suckered by it.

In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it, writing did it, books, television, search engines, etc. And so much money has been spent on making this a thing that we're going to have to contend with it until it undeniably starts costing more than it's worth, and if that cost is cultural or societal instead of financial, it might never go away.

> I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

I don't pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860's with the telegraph.

[–] BranBucket@lemmy.world 2 points 1 day ago (4 children)

Right. We can't fully blame the existence or even the use of AI. But given the way AI is often used, and the way my armchair studies of human nature tell me it will continue to be used, I think it will lead to more events like this. The trend of easy access and low retention did indeed start before LLMs, but they don't seem to be a remedy for it from what I can tell. At best they're neutral, and I'd argue they make it worse.

We could (and frankly probably will need to, because I doubt AI will be abandoned given the sheer volume of cash that has been dumped into it) build processes to account for the failings of LLMs and the failings in how we use them. Or we could look at existing methods, those we understand and have learned to work effectively with, and reapply them as needed.

My bet is that LLMs and genAI will exacerbate the trend of being info rich and knowledge poor, and the processes we have to create in order to safely and effectively apply it are going to be more costly than any efficiency we get out of adopting it. I could be wrong, but I'd bet you a six-pack of whatever you drink that I'm not. Collectable in five years, if Lemmy hasn't been replaced by LLMmy by then. I'll even ship international if need be. =)

[–] BranBucket@lemmy.world 1 points 1 day ago (6 children)

You're right that we can't rule out complacency and human error. And we have internal reviews precisely to account for complacency. Again, I'm intimately familiar with both the safety culture and the people involved; this is an unusual and recent development. But I suppose asking you to take my word for it might strain credulity. It is what it is.

I'd be inclined to agree with you more if it weren't for how widespread the smaller issues are. The general trend, among the old and young alike, is less actual knowledge of the job and more reliance on quick access to information that often isn't applied properly in context. It existed before AI, and it has gotten worse with its introduction. Something about instant access to information seems to harm retention and application of that info. It's a pretty obvious trend to me, as part of my job is to ensure it's retained and applied properly.

Those procedures built around autopilot, along with other issues of flying more complicated modern aircraft, were dealt with by controlling how information flowed, how it was communicated, and the weight of authority it was given, often with human processes like Crew Resource Management. As I've said, the presentation of information absolutely changes how people understand and apply it. CRM helps because it prompts people to present information to each other in a way that facilitates better decision making and delegation in a crisis.

But autopilot was obviously beneficial right from the start. It reduces pilot fatigue on long-haul flights and helps keep air traffic in the right place. Pilot complacency was never really a worry, but malfunctions were.

In the end, it's not that it can't be done. We could adjust our processes to include LLMs simply because people think they're neat. It's just that there's no compelling evidence that it's better for distributing information, developing procedures, or teaching people how not to die.

[–] BranBucket@lemmy.world 2 points 1 day ago (2 children)

I would ask it a careful question, and I would get a well worded, persuasive, but ultimately careless reply that's just repetition of information and devoid of any new reasoning or insight.

I would carefully ruminate on this reply, and find that at best, it's factually correct because it's an echo of the training data fed into the model, and although it sounds highly persuasive, it likely will need additional work to be adapted into the specific context and details of my situation.

But that's not my main complaint. My complaint is that the medium used seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves To Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

[–] BranBucket@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

I drank and smoked occasional cigarettes starting around age 16.

However, this was the 90s, and that sort of thing seems to have been tolerated a lot more back then, at least in my area. I can remember getting busted with tequila at age 19 and the main complaint being that we were cutting limes on an antique table without using a cutting board.

And many of the things that are seen as huge problems with both alcohol and tobacco were just starting to get widespread attention at that time.

[–] BranBucket@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (4 children)

It's not that I think there aren't legitimate uses for AI, or that it couldn't be used as a learning tool.

It's that I doubt it's better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you're describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.

[–] BranBucket@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (8 children)

As I alluded to in another comment in this thread, the worst I've personally seen were procedures developed that would have had people entering areas that were not just hazardous but incompatible with human life, and performing maintenance on fully energized industrial systems without safety constraints in place. Both cases would have caused fatalities if someone blindly followed the checklist as written. An internal review caught these mistakes, but they should never have made it that far.

The people designing the procedure checklists missed them, possibly because, as you said, AI lies beautifully, but I think it was also because many people seem to have an inclination to trust it over their own judgment and knowledge. These were supervisors with years of direct experience; the red flags should have been instantly obvious. If they'd written it out by hand, the proper order of events would have been almost muscle memory. So what made them so careless?

They claimed they just used AI to format and grammar check their work, and I don't have logs to prove or disprove that. But this is more than just a hallucination; it's a lack of reasoning similar to the car wash problem, but with much more severe consequences. TBH I'm not sure even adding specific knowledge of our equipment and facilities would fix it, let alone just a reduction in hallucinations.

On top of that, I've seen a long, long-running trend of people who just will not take the time to read and understand the sum total of information needed to safely and correctly perform our work. It's a lot, but we do complicated and dangerous things. They've replaced knowing things with Googling them or searching through documents to find a possibly out-of-context quote. Failed safety and regulatory compliance inspections are far more common because people just don't know what they need to know, despite having all that information at their fingertips. Nothing seems to be processed or retained; it's just sort of gawked at and repeated.

They aren't dumb. I work with them. I know them. It's not just stupidity and it's not just hallucinations. Our tools are using us, and it should always be the other way around. A tool that can't be used, in both the philosophical and literal sense, should be discarded.

I'm not trusting AI anytime soon, and I remain suspicious of everyone until they prove themselves to actually understand what's going on.

I'm willing to reconsider things as technology improves, but I wouldn't bet my 401k on LLMs being worth a shit anytime before I retire.

[–] BranBucket@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

People who view in-game achievements and custom gear as status symbols, or who aren't entertained by games in the same way as others. Some people's enjoyment may come from "having" as much as "earning".

Want a sweet bit of gear for your character that's only available if you grind eight hours a day for a set week in November as part of a charity event? Need to do daily quests or missions for weeks on end to maintain your ranking? Can't find time to sleep and finish timed events?

Different strokes, you know?

For me, I'm okay with not being elite, and letting those who have the time or drive to do extreme challenges have something unique. Then again as I've grown older I've come to resent games with massive time sinks that feel like a second job to me. I just want to relax a little. A game shouldn't consume all my free time just for me to make meaningful progress. But I'm not going to say those who enjoy huge grinds are wrong.

But others may feel left out or cheated when real-life commitments limit their ability to do these things, and I'm not going to tell them that's invalid. It's just games, and if they want to pay for AI to get a special hat for their character, that's fine by me.

EDIT: I will say, for co-op and competitive games, this would annoy the shit out of me, regardless of the AI being good or bad. But then again, I don't really enjoy anything that's online only with a ranking system or forced co-op these days, largely because of vast numbers of people who seem to take them far, far too seriously for my tastes, so that's going to be something I don't worry much about. To each their own, you know? But I get why it would be a pain.

[–] BranBucket@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (10 children)

Granted, I'm okay with whatever works best for teaching the process.

I believe a great many people of all ages simply treat Google, and now chatbots, as "answer machines". They grab hold of something from the first few results, sometimes just the text of the link itself, and that's their answer. No analysis, no critical thinking, no further thought needed.

I feel like search engines and AI have become a form of thought-terminating cliché for some. People trust the information presented far more than they should and don't seem to be able to analyze or apply it in a broader context. They'll double-check if a human tells them the sky is blue, but cite a Facebook post as gospel even after it's led them to disaster.

I get that this is human nature to an extent, but it's also partly the nature of the medium. Something about the internet and computers makes people want to trust that information without deeper analysis. I think that's partly because of how we regard them culturally, and we should move away from the unfounded belief that computers do better analysis than humans. They're faster and more apt to find certain kinds of small details, but not necessarily better in all contexts. Critical thinking and analysis should be assisted by technology, not replaced by it.

Sitting down, reading, collecting information, processing and analyzing that information, and then writing up what you've learned is a skill everyone needs to cultivate, no matter how advanced our technology becomes.

[–] BranBucket@lemmy.world 2 points 2 days ago (15 children)

Offline research and note taking too. Digital if you don't want to waste paper, but encyclopedias should make a comeback.


SMP Selle TRK medium. Super comfy. Best decision I've made since buying the bike.
