The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself
(theconversation.com)
"We did it, Patrick! We made a technological breakthrough!"
The dependency we will develop on AI will enable a kind of leverage over entire populations that borders on a national security risk.
Which is why some nations will outright ban it, and others will sleep on the risk until it is too late. It's the "should we do anything about this social media stuff" discussion all over again... but on steroids.
We can see it already: the number of people who rely on it for... literally any information request. And it's freakishly hard to fight against, mainly because the same companies that push those AI products are actively making traditional information sources worse (e.g. Google Search).
I feel the effect when it happens, as I am required to use AI in my work. I have to acknowledge it and take steps not to offload memorization and analysis entirely to an AI assistant. Mitigating this impact takes time and effort. The danger is that AI is good enough that the gains in speed may outweigh the risks and the cost of errors. If the efficiency gains are high enough, meeting the performance standards required to hold a position at a company will not be possible without using AI, and using it in a way that makes it impossible to prevent dependency from forming. People will have to use AI in a way that ensures dependency in order to keep the job. The costs are borne by the workers; the benefits are reaped by the owners.
That's the leverage that will reshape our society: we will be forced to work in ways that make us worse at learning, memorization, and analysis. Ask a product owner a question and they have to reach for AI, because the environment makes it impossible to have the answer without it. And if they can get the answer from AI, so can the person asking the question. So with AI adoption, leadership, decision-making, and expertise are all transferred to ownership, decimating the middle roles between implementation and ownership that the entire office environment is built around. They're trying to use it to replace implementation as well, as with software engineers.
For business, AI is a massive opportunity to reduce its dependency on human labour while making the remaining labour dependent on AI. It's a nightmare for society and for human beings. If robotics manages to accelerate alongside it, what is a population even capable of doing to protect itself from the harm of this corporate empowerment? No jobs, no money, no legal access to resources, and facing an autonomous robotic security system protecting those resources?
History shows that self-interest and concentrated power lead to mass suffering. I don't see how these new technologies, in the hands of private power, will produce anything different this time.