I love how everyone is so desperate to make Gabe out to be a terrible person.
We're acting like people in the art community weren't hyped up over AI until the tools started generating images. Before ChatGPT, it was all about automating coding/IT and other jobs that aren't considered art. Back then it was all about how everyone could pursue their passions. The only people not excited were the transportation employees and factory workers who had been told by the general public how excited everyone was to replace them.
As a social scientist, pre-ChatGPT NLP was like opening up a whole new world of possibilities. We could finally analyze, at scale, one of the richest sources of behavioural data in an empirical, statistically driven manner.
Now, even as I do research with NLP in pursuit of those same goals, I can't bring myself to ever defend these tools. If they disappeared tomorrow, we'd lose a tool, but we'd prevent so much undue suffering.
Was this article commissioned by Tim Sweeney?
Altman took the money, and then OpenAI abandoned the non-profit structure to become a for-profit entity (two years ago).
The writing was on the wall for years. I remember memes about Altman in machine learning forums/chatrooms circa 2020, and especially 2021.
Nothing's changed. Anyone in the space who actually looked at what he was doing, knew. Yet the bulk of the public (and investors) lapped the Tech Bro stuff up.
Aaron Swartz said Altman was a sociopath years before AI was a gleam in anyone's eye.
The technologies with the worst potential outcomes will always be pioneered by people with no ethical or moral hangups getting in the way.
Which unfortunately are the same techs that will be elevated by our present economic structure, precisely because those traits are what enable them to make (or grift) a shitload of money.
see:
Leaded Fuel and CFCs - the same fuckin guy!? goddamn hope there is a hell
Obligatory reminder that billionaires are not our friends. But also, donating to AI research in 2018 is quite a different matter than if he had done so in recent years. Most people in tech were somewhere between neutral and enthusiastic towards machine learning back then and few foresaw the monster it would become. Doubt he's as enthusiastic nowadays, considering what it did to Valve's hardware ambitions.
OpenAI, back then, was also a very different organization. They were mostly a non-profit, claiming to be a research organization whose goal was to ensure AI benefited all of humanity. Hell, I'd say Whisper, which that version of OpenAI did release, was very positive for humanity. It was when Sam Altman saw big dollar signs in GPT-2 and its successors that things started changing fast.
Very much this. In 2023 there was a falling-out between Altman and the OpenAI board over this, and Altman was kicked out. However, some big shareholders (Microsoft) made a stink and reversed it.
I think many employees close to Altman also threatened to strike or leave. But I think he's bad for the (now for-profit) company. They should've stayed non-profit.
Wellllllll, I dunno about this take, seeing as he's still very enthusiastic about it as of less than a year ago, with some very… hype-style statements about it.
If you can mentally separate the technology from the capitalist orgy around trying to shoehorn LLMs into every possible thing, he's not wrong.
The technology has promise, but the reality of what it can be useful for is completely overshadowed by the hype frenzy declaring the end of all knowledge workers and creatives.
LLMs are significantly better at translation than anything else we've been able to design, for instance. But that's not flashy; it doesn't generate seed funding or lure investors, so it's largely not what people think of when they hear "AI".
Right, he might be a little further down, but he’s absolutely still on the list. There are no good billionaires.
I mean, I probably would have invested in AI prior to seeing LLMs in action, too, hoping I was funding the cool kind of AI, not this lame shit.
Look, there is one thing it does incredibly well: it makes a fantastic spelling checker.
I've also found a niche use if you are bi- or multilingual:
Write a paper, letter, etc. Paste it in and ask Chat to translate it into another language you know. Then translate it yourself back into your target language. All of the phrasing and word choice will be yours and consistent since it is done in one go, while your original paper may have been done in spurts over weeks.
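The first half of that workflow (one-pass machine translation so phrasing stays consistent) can be sketched with the OpenAI Python SDK. This is a rough sketch, not the commenter's actual setup: the model name is illustrative, and it assumes the `openai` package is installed and `OPENAI_API_KEY` is set.

```python
def build_translation_prompt(text: str, target_language: str) -> list[dict]:
    """Build chat messages asking for a single-pass translation of the
    whole document, so word choice and phrasing stay consistent."""
    return [
        {"role": "system",
         "content": (f"Translate the user's text into {target_language}. "
                     "Do it in one pass and keep terminology consistent "
                     "throughout.")},
        {"role": "user", "content": text},
    ]

def translate(text: str, target_language: str,
              model: str = "gpt-4o-mini") -> str:
    """One-shot translation call (requires `pip install openai` and an
    API key in the environment). Model name is an assumption."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=build_translation_prompt(text, target_language),
    )
    return resp.choices[0].message.content
```

You would then translate the result back into your target language by hand, as described above, so the final wording is yours but anchored to one consistent draft.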
Hmm. I don't know enough to comment. While it sounds likely, I have heard complaints about translation, such as unexpected shifts in tone, etc.
I’ve done it a couple times and found that it worked well. I’d never use it as a translator for something official or formal but I have used it to help me translate specific words or phrases when I was unsure.
At that time it was still more of a research project than an "it's going to take over everything" hype and FUD machine.
His opinions on AI today seem more enthusiastic than I would be, but well clear of the delusional level of AI-boosters.
Before OpenAI about-faced on being open?
Back then they were still deep into research and the "Open" part of their name actually meant something. I don't like much about Musk, but I feel like it's true that they deceived the people who supported their initial mission, just to go private when the market went haywire for AI. I feel like shedding their non-profit status shouldn't have been an option, since so many people donated to them in good faith.