Disillusionist

joined 1 week ago

Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an "edu" version of a model like ChatGPT or Gemini), usually in the name of preparing kids for the future, even though no consensus exists about what preparing them for an AI future actually means.

Some educators have said that they believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the "robots in front of the kids so they can learn to dance with them" (paraphrasing a quote from Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn't something we should take lightly. In very important ways, AI isn't comparable to the technologies that came before it.

The kind of reasoning we're hearing from educators in favor of AI adoption in schools doesn't offer very solid arguments for rushing to include it broadly in virtually all classrooms, rather than offering something like optional college courses in AI education for those interested. It also doesn't sound like the sort of academic reasoning and rigorous vetting many of us would have expected from the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it's been around for a long time, three years is hardly enough time to have a firm grasp on what something that complex actually means for society or education. It's really a stretch to say it's had enough time to establish its value as an educational tool, even if we had come up with clear and consistent standards for its use, which we haven't. We're still scrambling and debating about how we should be using it in general. We're still in the AI wild west, untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they're young, potentially cultivating brand and product loyalty early. They get a wealth of highly valuable data on those children. They may even get to experiment on them, as they have previously been caught doing. And they reinforce the corporate narratives behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume that these companies are doing this as some sort of public service, their track record reveals a more consistent pattern of actions focused on considerations like market share, commodification, and the bottom line.

Meanwhile, there are documented problems educators are contending with in their classrooms as many children seem to be performing worse and learning less.

The way people of all ages use AI has often been shown to encourage "offloading" thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.

This approach has the potential to leave children even less prepared to deal with the unique and accelerating challenges our world is presenting us with, which will require the same critical thinking skills which are currently being eroded (in adults and children alike) by the very technologies being pushed as learning tools.

This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I've heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what's real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I am personally not against AI; I sometimes use open source models, and I believe there is a place for it if it's done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we're hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

[–] Disillusionist@piefed.world 18 points 1 day ago* (last edited 1 day ago)

Thank you for kicking this hornet's nest. There is a lot of great info and enthusiasm here, all of which is sorely needed.

We have massive and widespread attention paid to every cause under the sun by social and traditional media, with movements and protests (deservedly) filling the streets. Yet this issue, which is as central and crucial to our freedoms as any of the rights currently being fought for (it intersects with each of them directly), continues to be sidelined and given the tinfoil hat treatment.

We can't even adequately talk about things like disinformation, political extremism, or even mental health without addressing the role our technologies play. Those technologies have been hijacked by bad actors, robber barons selling us ease and convenience and promises of bright, shiny, Utopian futures while conning us out of our liberty.

With society declining so widely and rapidly, and with technologies like AI rising and spreading so dramatically, there has never been a more urgent need to act collectively against these invasive practices claiming every corner of our lives.

We need those of you who recognize this crisis for what it is; we need your voices in the discussions surrounding the many problems and challenges we face at this critical moment. We need public awareness if we're to have any hope of changing this situation for the better.

As many of you have pointed out, the most immediate step we need to take is disengaging from the products and services that are surveilling, exploiting, and manipulating us. Look to alternatives, ask around, and don't be afraid to try something new. Deprive them of both your engagement and your data.

Keep going, keep resisting, do the small things you can do. As the saying goes, small things add up over time. Keep going.

[Edited slightly for clarity]

[–] Disillusionist@piefed.world 4 points 2 days ago

Hilarious. I bite my tongue so often around these kinds of situations it has permanent tooth imprints in it. But you're right, someone needs to figure out how to get them to stop tolerating this horrific nonsense.

[–] Disillusionist@piefed.world 13 points 2 days ago

The more people who demand better from their employers (and services, governments, etc.), the better those things will get in the long run. When you surrender your rights, you worsen not only your own situation but everyone else's, because you validate and contribute to the system that violates them. Capitulation is the single greatest reason we have these kinds of problems.

We need more people doing exactly as you did, simply saying no. Thank you for fighting, and thank you for sharing. Best wishes in your job hunt.

[–] Disillusionist@piefed.world 2 points 4 days ago

I do think you're absolutely right. I know people doing exactly that — checking out — and it does seem like a common response. It is understandable, a lot of people just can't deal with all that garbage being firehosed into their faces, and the level of crazy ratcheting up through the ceiling. And that reaction of checking out is one of the intended effects of the strategy of "flooding the zone". Glad you pointed that out.

[–] Disillusionist@piefed.world 1 points 4 days ago (1 children)

No secret, ML is Marxist-Leninist. They tend to have a focus and way of framing things similar to what I'm picking up from you.

[–] Disillusionist@piefed.world 1 points 4 days ago (3 children)

Odd statement to cut and flip around out of all of that text. Reminds me a lot of ML.

[–] Disillusionist@piefed.world 8 points 4 days ago* (last edited 4 days ago) (1 children)

Fair enough. I posted this rather long comment elsewhere, but what I said there pretty much explains a lot of my thoughts on the situation.

I wouldn't put much past the current American administration. I haven't been able to shake the impression that we might really be looking at the telegraphing of an invasion. From what we know and have seen, the administration is very much itching to apply the fullest extent of its powers. It's defined by unprecedented and extraordinary use of extralegal action and complete disregard for how it might be seen by the world at large.

They said for years that Russia wouldn't move on Ukraine, and then green men marched in and took over Crimea. It's no secret how much America is becoming like Russia in every way. The US already has a significant military presence in Greenland, so a green men play would be really easy. And Greenland has a surprising number of politicians who openly say they prefer the security offered by Trump's America over Denmark's, even as they declare that they want independence (experts argue that independence might actually make them more vulnerable to takeover right now). It's easy to assume that at least some Greenland doors would open to an American green men advance.

Also, as far as consequences for taking over Greenland go, we seem to be looking primarily at a breakup of NATO, something that is also on this US administration's longstanding wish list. Experts don't seem to think it would ultimately result in an actual war so much as make it crystal clear that the old rules no longer apply and that the US isn't a friend (the Article 5 debate is shaky, especially against the prospect of actually going to war with America, and especially while NATO is also dealing with the Russian war in Ukraine). On paper it kind of reads like a win-win-win situation for the current brazen, imperialist, and isolationist American kleptocracy.

I'd say we at least need to take this stuff seriously.

[Edited for formatting]

[–] Disillusionist@piefed.world 20 points 4 days ago (15 children)

Real serious stuff happening right now, including American threats and claims against other nations — like what this post is about (it's really not bullshit at this point). Not everything is about Epstein.

[–] Disillusionist@piefed.world 2 points 4 days ago

Not enough people point to this, even though it sits close to the heart of most of our other big problems right now. Big Tech in general exists in a societal blindspot, even as it actively and deliberately fractures and exploits society from all possible angles for profit and power.

[–] Disillusionist@piefed.world 7 points 4 days ago (5 children)

I wouldn't put much past the current American administration. I haven't been able to shake the impression that we might really be looking at the telegraphing of an invasion. From what we know and have seen, the administration is very much itching to apply the fullest extent of its powers. It's defined by unprecedented and extraordinary use of extralegal action and complete disregard for how it might be seen by the world at large.

They said for years that Russia wouldn't move on Ukraine, and then green men marched in and took over Crimea. It's no secret how much America is becoming like Russia in every way. The US already has a significant military presence in Greenland, so a green men play would be really easy. And Greenland has a surprising number of politicians who openly say they prefer the security offered by Trump's America over Denmark's, even as they declare that they want independence (experts argue that independence might actually make them more vulnerable to takeover right now). It's easy to assume that at least some Greenland doors would open to an American green men advance.

Also, as far as consequences for taking over Greenland go, we seem to be looking primarily at a breakup of NATO, something that is also on this US administration's longstanding wish list. Experts don't seem to think it would ultimately result in an actual war so much as make it crystal clear that the old rules no longer apply and that the US isn't a friend (the Article 5 debate is shaky, especially against the prospect of actually going to war with America, and especially while NATO is also dealing with the Russian war in Ukraine). On paper it kind of reads like a win-win-win situation for the current brazen, imperialist, and isolationist American kleptocracy.

I'd say we at least need to take this stuff seriously.

[–] Disillusionist@piefed.world 3 points 1 week ago* (last edited 1 week ago)

This reminds me of an article someone posted titled Homo Stultus: The Case For Renaming Ourselves. It mentions that Homo Sapiens means "wise man", but:

The more fitting name is Homo stultus—“foolish man.”

To most people I'm sure that might sound a bit misanthropic, but:

To rename ourselves Homo stultus is not mere cynicism. It is an act of moral realism. Names shape identity, and identity shapes behavior. To be “wise man” is to assume that wisdom already defines us; to be “foolish man” is to recognize that it does not. Such recognition could mark the beginning of genuine wisdom—the kind born of humility rather than hubris.

I think it's actually a pretty valid wake up call. The article brings up a lot of good points.

Here are a couple more quotes talking about the problem:

The root of our folly lies in the myth of human exceptionalism

Anthropocentrism, in other words, is a kind of education—a cultural conditioning that replaces empathy with hierarchy.

I've actually started thinking we're due for a name change myself. It's like holding up a mirror that confronts us with an image most of us avoid looking at honestly whenever we can. It might help if we started seeing ourselves as our actions demonstrate us to be, rather than through the more comforting yet ignorant narratives we tell ourselves. Maybe then we could start trying to become actual Homo Sapiens.

[–] Disillusionist@piefed.world 3 points 1 week ago* (last edited 1 week ago) (1 children)

Shining a light on a problem is good, directing people to resources where they can seek help is also fine. The problem I have with this article is that it steers into policy with statements like:

"Experts are urgently calling for a national strategy on pornography"

and the ambiguous claim that:

"the government aren’t doing [enough].”

What role are they implying the government should have in any of this? By and large, governments tend to respond to "addiction problems" with some form of ban. Anti-porn legislation tends to amount to poorly drafted, ill-considered blunt instruments that seem very likely to cause more problems than the issues they claim to address (and it is often backed by dubious special interests that clearly have other agendas). The article presents the claim that it's:

"Not an anti-porn crusade"

But the article doesn't mention any other kind of action or involvement the government might take in response to the problem.

Articles that cover subjects as controversial and consequential as this one should be especially careful and informative in the way they discuss them; otherwise they risk merely fanning the flames.

[Edited for clarity]
