Like when the coach of a big sports team is interviewed at the half-time break of a game they're losing, this guy is of course going to say they're still going to win.
It would only be news if he confessed they were fucked.
Why is that wizened old man dressed in a leather jacket?
wizened?
wizen; intransitive verb: to become dry, shrunken, and wrinkled, often as a result of aging or of failing vitality
Yeah, I’m sure they do
These people make me thirsty for guillotine time
It's very hard to convince someone of something when their paycheck requires they don't understand it. Seriously: "Hey, I'm one of the people who has been demanding my company finish AI and roll it out and stuff it into everything we sell to look like massive growth so my stocks will start going up, even though the language models aren't even designed to do what we're claiming, they don't even work well as language models, we have no real use for them, and they're horrifyingly costly to run, but no, I don't think it's a bubble."
He isn't Fonzie but he has definitely jumped the shark.
Autoerotic asphyxiation from farts in an echo chamber produces the wildest trips.
I regularly use GH Copilot with Claude Sonnet at work and it’s a coin toss whether it’s actually useful, but I overall do find value in using it. For my own use at home, I don’t do subscriptions for software and I’m also not giving these companies my data. I would self-host something like Qwen3 with Llama.cpp, but running the flagship MoE model would basically require a $10k GPU and one hell of a PSU. I could probably self-host a smaller model that wouldn’t be nearly as useful, but I’m not sure that would even be worth the effort.
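For what it's worth, self-hosting a smaller model is pretty low-effort these days. Here's a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp); the GGUF filename, context size, and GPU-offload settings are illustrative assumptions, not a recommendation of a specific Qwen3 variant:

```python
# Minimal sketch: run a smaller quantized model locally with llama-cpp-python.
# The model path and parameters below are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen3-8b-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=8192,          # context window; larger values need more memory
    n_gpu_layers=-1,     # offload all layers to the GPU if they fit in VRAM
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a MoE model is."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Whether the quality of an 8B-class quantized model is worth the effort compared to the flagship MoE is exactly the open question.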
Therein lies the problem. My company is paying a monthly fee for me to use Copilot that would take like 20 years to pay for even one of the $10k GPUs that I'm likely hogging for minutes at a time, and these companies are going to spend trillions building data centers full of these GPUs. It's obvious that the price we are paying for AI now doesn't cover the expense of actually running it, but it might once these models become less resource-intensive, to the point that they can run on a normal machine. However, in that case, why even run them in a data center instead of just running them on the user's local machine? I'm just not following how these new data centers are going to pay for themselves, though maybe my math is wrong, or I'm ignorant of the economies of scale of hosting these models for a large user base.
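The back-of-envelope math behind that "20 years" figure, with the seat price and GPU cost as assumptions (roughly a $39/month Copilot-style seat and a $10k GPU):

```python
# Back-of-envelope check of the "20 years" claim above.
# Both numbers are assumptions, not quoted prices.
seat_price_per_month = 39     # assumed monthly subscription per user
gpu_cost = 10_000             # assumed cost of one data-center GPU

months = gpu_cost / seat_price_per_month
print(f"~{months:.0f} months (~{months / 12:.0f} years) to pay off one GPU from one seat")
# ~256 months, roughly 21 years -- ignoring power, cooling, and networking,
# but also ignoring that one GPU is shared across many concurrent users.
```

The sharing across users is the economies-of-scale part I can't quantify, which is why I'm not sure whether the math actually works out for them.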