this post was submitted on 16 Aug 2023
12 points (100.0% liked)
GenZedong
4594 readers
77 users here now
This is a Dengist community in favor of Bashar al-Assad with no information that can lead to the arrest of Hillary Clinton, our fellow liberal and queen. This community is not ironic. We are Marxists-Leninists.
This community is for posts about Marxism and geopolitics (including shitposts to some extent). Serious posts can be posted here or in /c/GenZhou. Reactionary or ultra-leftist cringe posts belong in /c/shitreactionariessay or /c/shitultrassay respectively.
We have a Matrix homeserver and a Matrix space. See this thread for more information. If you believe the server may be down, check the status on status.elara.ws.
Rules:
- No bigotry, anti-communism, pro-imperialism or ultra-leftism (anti-AES)
- We support indigenous liberation as the primary contradiction in settler colonies like the US, Canada, Australia, New Zealand and Israel
- If you post an archived link (excluding archive.org), include the URL of the original article as well
- Unless it's an obvious shitpost, include relevant sources
- For articles behind paywalls, try to include the text in the post
- Mark all posts containing NSFW images as NSFW (including things like Nazi imagery)
founded 4 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
I find it insane that ensuring the input's accuracy is something they balk at.
The appeal, aka the hype, of all the current AI is that they're able to reap the fruit of free labor by scraping user-generated data. Having to curate the data means they'd have to spend resources on real employees, which would cut into their profit, so of course the capitalists are gonna cry foul.
Exactly.
It's like, uh, they could just use verified sources to begin with, instead of user-edited stuff mainly staffed by the CIA and corporate influence peddlers? Like maybe China has a state encyclopedia, plus volumes of peer-reviewed academic works.
It's maddening that this is seen as a burden. Go back to conceptions of intelligent machines from the 60s through the 80s, and many of the futurists imagined them being carefully taught: ingesting written works separated into fiction and non-fiction, being guided by teachers, and so on. These lazy western companies behind things like chatgpt just want to skip all the hard work of actually making machines that have, and can give, correct answers. They want to skip to the finish line to collect the money, then paper over and correct models built on falsity after the fact, to the extent they can, only as problems appear. China's approach of making sure the models are built correctly off of only good data to start with will likely be easier to manage down the road.
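The "curate first" approach described above can be sketched in a few lines: rather than scraping everything and patching the model afterward, documents are filtered against an allowlist of verified sources before they ever reach training. This is a minimal illustrative sketch, not any company's actual pipeline; the source names and sample corpus here are hypothetical.

```python
# Hypothetical allowlist of verified sources (assumption for illustration).
VERIFIED_SOURCES = {"state_encyclopedia", "peer_reviewed_journal"}

# A toy corpus: each document records where it came from.
corpus = [
    {"source": "state_encyclopedia", "text": "Checked entry on steel production."},
    {"source": "random_wiki_edit", "text": "Anonymous, unreviewed claim."},
    {"source": "peer_reviewed_journal", "text": "Reviewed study on crop yields."},
]

def curate(docs, allowed):
    """Keep only documents whose source is on the verified allowlist."""
    return [d for d in docs if d["source"] in allowed]

training_set = curate(corpus, VERIFIED_SOURCES)
print(len(training_set))  # 2 of the 3 toy documents survive curation
```

The point of doing this up front is exactly the one made above: the filtering cost is paid once, by human curators building the allowlist, instead of being paid repeatedly in after-the-fact corrections to a model trained on junk.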