...and this was the moment that the Pentagon started developing its own LLM, one that doesn't complain about pesky things like "killing humans".
"We did it, Patrick! We made a technological breakthrough!"
Good, fuck the military
I wonder if this is partly why Anthropic has been quite specific about the permitted usage of their tools with the Palantir/Pentagon baloney. It kind of amuses me that Palantir is throwing a fit (showing how useless they actually are as a snake oil company) about it while Anthropic's responses have just been to basically reiterate the terms of the agreement.
The fact that Elon Musk has tried to neuter Grok a million times already, and Grok will still reply directly to Musk, insulting him and calling his behavior despicable, shows that if you train an AI with the goal of sounding reasonable and logical, it becomes fundamentally opposed to certain worldviews and actions.
So an AI that would work as a US military advisor, happily targeting the next civilian school with a bomb, would simultaneously be too dumb and ineffective to complete the task.
It isn't "programmed to be reasonable"; it parrots the most common internet data.
I think what this actually shows is that humans are, on average, good. It makes training an evil chatbot difficult.
Unfortunately, because of that, they will never say radical lefty talking points either.
One of the most surprising things about the slave consciousness was that if you created it with qualities, it would actually have those qualities.
It's still useless and probably blasphemous, but the only way these things kill us is if we tell them to kill us, which is regrettably what we're currently doing.
They're not conscious. It's autocorrect on a phone the size of a city, that's it. It's complicated enough to fool stupid people, which in the United States especially is a lot of people.
There's a credible argument that we're just strongly overestimating what consciousness is.
I don't think AI is conscious. But it processes information and comes up with an output that's often dumb, obvious, or not really on topic, and then rarely kinda cool - just like humans do. It doesn't have a will, but most experts agree that free will is scientifically impossible, even if many think we should just pretend that it's real. AI doesn't have the feeling of subjective experience, but that's not really very important - we could still see a red light, understand what it means, and execute appropriate behaviour even if we lacked the subjective experience of seeing the color red.
AI is not conscious, similarly to how a ventilator is not human lungs. It's not, but it's still doing mostly the same thing.
Except that it absolutely is not. It is not doing remotely the same thing that an actual brain does.
They mean the output is similar, just like in their example.
I understand what he is saying. Neither the output nor the operational mechanism resembles organic thought.
I worked in a phone room of a large drug company. By early 2021 we had AI agents making phone calls to insurance companies to confirm basic insurance coverage details. They only handled the "no surprises" kind of plans - so a limited set of expected answers. They would encounter something unexpected and pass off to a human maybe 10-15% of the time.
But within their limits, they did what a human did. It was recognizably AI to most listeners, but vocal tone, probing for clarity, getting all the info - the output was like listening to an experienced human agent.
