After watching president after president age like milk, I'm convinced that it's not the stress of office alone doing it. Even during normal times, it's an incredibly demanding position without a lot of room for downtime. Add an attending physician to the mix and you're good to go, packed to the gills with uppers, downers, antibiotics, blood thinners, beta-blockers, anti-parasitics, testosterone, you name it. While you're able to keep the schedule of a 30-year-old this way, you're also burning the candle at both ends and in the middle. It's little wonder that such a person can more or less shake off a bad diet at 80+.
Jeorgia?
Placing every last presidential decision on a coin flip would have yielded better results.
Easily done:
- Wait for AI bubble to burst
- Buy render farm on the cheap
- Avatar 4
Although I'm assuming that the raw rendering pipeline is what costs the most. I could be dead wrong about that - there's a whole army of artists, technical people, and actors that go into such a production too.
Thank you for your service.
like sifting through resumes.
Then we need to call it what it is: This exchange is an HR screen.
Make no mistake, the oligarchs see the personal computer as a 40-year-old experiment that has failed, or needs to fail. They want their mainframes and CPU/hr billing back. Server hosting for enterprise uses has already gone this way for the most part. Small consumers are next.
As far as I recall, that's how it went.
I have a lot of thoughts on this because this is a complicated topic.
TL;DR: it's breakthrough tech, made possible by GPUs left over from the crypto hype, but TechBros and Billionaires are dead set on ruining it for everyone.
It's clearly overhyped as a solution in a lot of contexts. I object to the mass scraping of data to train it, the lack of transparency around exactly what data went into it, and the inability to have one's art excluded from any/all models.
Neural nets as a technology have a lot of legitimate uses: connecting disparate elements in large datasets, finding patterns where people struggle, and more. There is ample room for legitimately curated (vegan? we're talking consent, after all) training data, getting results that matter, and not pissing anyone off. Sadly, this has been obscured by everything else surrounding the technology.
At the same time, AI is flawed in practice because its single greatest strength is also its greatest weakness. "Hallucinations" are really all this thing does; we only call the output that when it's obviously wrong, and "obviously wrong" is in the eye of the beholder. In the end, these things don't really think, so they aren't capable of producing right or wrong answers. They just compile stuff out of their dataset by playing the odds on what token comes next. It's very fancy autocomplete.
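To make "fancy autocomplete" concrete, here's a toy sketch of the idea (a bigram word-chooser, nowhere near a real transformer, but the same "play the odds on the next token" move):

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat saw the dog on the mat".split()

# Count what word tends to follow each word (a toy stand-in for a trained model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Autocomplete: repeatedly sample the next word in proportion to how often it followed the last one."""
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break  # no continuation seen in the "training" data
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

It produces plausible-looking continuations, and whether any given one is "right" is entirely up to the reader.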
To put the above into focus, it's possible to use a trained model to implement lossy text compression. You ship a model trained on a boatload of text, prose, and poetry ahead of time. Then you can send compressed payloads as a prompt. The receiver uses the prompt to "decompress" your message by running it through the model, and they get a facsimile of what you wrote. It won't be a 1:1 copy, but the gist will be in there. It works even better if it's trained on the sender's written work.
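A hand-wavy sketch of that scheme, assuming both sides already ship the same model and some generate-from-a-prompt function for it (both are stand-ins here, not any particular API):

```python
def compress(message: str, keep_words: int = 5) -> str:
    """Sender side: keep only a short seed of the message as the 'compressed' payload."""
    return " ".join(message.split()[:keep_words])

def decompress(payload: str, model_generate) -> str:
    """Receiver side: ask the shared model to continue the seed, yielding a facsimile."""
    return payload + " " + model_generate(payload)

# Usage, with model_generate being whatever text-generation call both ends have:
# wire_payload = compress(long_email)                       # tiny, lossy
# facsimile    = decompress(wire_payload, model_generate)   # the gist, not a 1:1 copy
```

The better the shared model already "knows" the sender's style, the smaller the seed you can get away with sending.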
The hype surrounding AI is a product both of the push to secure investment and of the staggeringly huge investment that hype, in turn, generated. I think it's all caught up in a self-sustaining hype cycle now that will eventually run out of energy. We may as well be talking about Stanley Cups or limited edition Crocs... the actual product doesn't even matter at this point.
The resource impact brought on by record investment is nothing short of tragic. Considering the steep competition in the AI space, I wager we have somewhere between 3x and 8x more AI-capable hardware deployed than we could ever possibly use at the current level of demand. While I'm sure everyone is projecting for future use and "building a market" (see hype above), I think the flaws and limitations in the tech will temper those numbers substantially. As much as I'd love some second-hand AI datacenter tech after this all pops, something tells me that's not going to be possible.
Meanwhile, the resource drain on adjacent tech markets has punched down even harder on anyone who might compete, let alone just use their own hardware; I can't help but feel that's by design.
Sweet tap-dancing christ, this whole thread. If there's anything I've learned today, it's that some teachers are the pettiest of dictators: they cannot tolerate being proven wrong, nor can they handle having their decision-making challenged. They're out there doing real, lasting damage to people and their ability to think critically.
It's almost enough to make me want to go into education, just to displace one of these tyrants.
Sincerely, I'm sorry all of you had to go through any of this. Here's hoping you have support and find closure.
I was gonna say this is at least Digg 3.0.
It's really the worst. For the uninitiated, the platen where your bags go is actually a scale. The self-checkout kiosk software waits for this bagging scale to quit moving (see: debouncing) before weighing the item and approving its scan and purchase. This is why, occasionally, if you're too fast or too slow, the kiosk gets angry and makes you flag down an attendant.
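Roughly the logic at play, as a toy sketch (not any vendor's actual firmware; read_scale() is a made-up stand-in for the hardware):

```python
import time

def wait_for_stable_weight(read_scale, tolerance=2.0, settle_reads=5, poll_s=0.1):
    """Debounce the bagging scale: only accept a weight once several consecutive
    readings agree within a small tolerance (grams)."""
    last = read_scale()
    stable = 0
    while stable < settle_reads:
        time.sleep(poll_s)
        current = read_scale()
        if abs(current - last) <= tolerance:
            stable += 1   # reading hasn't moved; count it toward "settled"
        else:
            stable = 0    # bag shifted or item landed; start the settle window over
        last = current
    return last

# Every scanned item blocks on a settle window like this before the kiosk approves it,
# which is where the per-item waiting comes from.
```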
That's not a problem for 10 items or less, but for a whole cart? All that waiting around adds up. Because of all that, it's practically impossible to match the speed of an employee, let alone beat it.