Here's a higher quality version
196
Community Rules
You must post before you leave
Be nice. Assume others have good intent (within reason).
Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.
Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.
Most 196 posts are memes, shitposts, cute images, or even just recent things that happened, etc. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".
Bigotry is not allowed; this includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.
Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.
Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.
Avoid AI generated content.
Avoid misinformation.
Avoid incomprehensible posts.
No threats or personal attacks.
No spam.
Moderator Guidelines
- Don’t be mean to users. Be gentle or neutral.
- Most moderator actions which have a modlog message should include your username.
- When in doubt about whether or not a user is problematic, send them a DM.
- Don’t waste time debating/arguing with problematic users.
- Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
- Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
- Ask the other mods for advice when things get complicated.
- Share everything you do in the mod matrix, both so several mods aren't unknowingly handling the same issue and so you can receive feedback on what you intend to do.
- Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
- Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
- Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don’t want them at all, such as obvious transphobes. No need to notify someone that they haven’t been banned, of course.
- Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
- First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
- Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
- No large decisions or actions without community input (polls or meta posts, for example).
- Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
- Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.
What's the "AGI existential risk" one about?
The AGI Liars are well funded by the AI industry. See, if AI is dangerous, that's because it's powerful and useful, which are things the AI industry loves to pretend are true.
It is powerful and useful, though, just not in 90% of the use cases they're trying to cram LLMs into.
I've heard the "companies bad, obviously, but gen AI is useful for specific tasks!" line, but what tasks is it good at that wouldn't be more efficiently done by a neural network or other far more efficient AI (other than writing corporate emails, which I'll give you)?
See, if AI is dangerous, that’s because it’s powerful and useful, which are things the AI industry loves to pretend are true.
But what tasks is it good at that wouldn't be more efficiently done by a neural network or other far more efficient AI
??
I was just asking what purpose LLMs serve compared to just using a different, specialized AI, since you said they're still useful. LLMs need massive databases and energy costs, and even then they routinely output gibberish, so what progress do they lend to society?
As someone in close proximity to AI doomers and with a reasonable knowledge of computer science: I don't think AGI is as silly as this meme presents it. However, climate change is a much more pressing and encompassing issue that I feel takes precedence over any superintelligence fears.
Like, the "best" argument I've heard is that climate change isn't as big a deal since superintelligence will solve it, but then we'll get subjugated by it. OK, but how about it solves our most existential problem first and then we worry about the aftermath?
They all seem to just throw up their hands and say we shouldn't let it save us (if it even could) and just live out the rest of our lives, since there's nothing we can do about either... Weird, though, how this argument only ever seems to come from comfortable Westerners. If they were really serious, they'd be out there destroying chip fabs in Taiwan.
idk, I just think it's the new climate-inaction narrative, but in tech-doomerist clothing.
The way to solve climate change is to stop carbon emissions. The trick is to convince everyone to do that. No one has written down anything that does that, on its own or in aggregate, so an LLM cannot regurgitate that answer.
Solving the climate crisis will inevitably involve the working class overthrowing the owner class, because we need power to change the systems of government and business responsible for pollution. The working class is the only class incentivized to fix the climate crisis, because the workers can't all fit in apocalypse bunkers the way the owner class can. And again no one wrote down the thing that will give all workers class consciousness so LLMs can't regurgitate that either.
The LLMs aren't AGI. So the theoretical abilities of AGIs and the consequences of creating AGIs aren't relevant. edit: typos
The goofy part of that argument is assuming there's some miracle solution that we just haven't thought of. There have always been solutions; they just weren't what power-hungry imperialists and capitalists wanted to hear. The surest way some superintelligence could save us is by somehow getting our short-sighted asses to set aside selfish advantage in favor of the common good.
Another flaw is assuming an AI could simply do science infinitely faster than we can, when science relies on gathering data and double-checking work with different methodologies. Those steps would still bottleneck an AI even if it could instantly analyze data and see insights no one else could.
Such a superintelligence could also be prone to the same logical pitfalls and biases we are, even if it had millions of times the processing power. We already see machine learning producing "errors" similar to those human minds make, as such networks have a lot of the same limitations and strengths as our own. It could become convinced it's correct and miss insights that contradict its preconceptions, stalling progress instead of helping it.
edit: grammar
AGI is terrifying in the same way that a time machine is terrifying. We should be scared shitless... if it existed. If we had some inkling of how such a thing would even be built, we'd need to start worrying.
Good thing we're about as close to AGI as we are to a time machine. Of course, the LLM pushers, who are hundreds of billions in the hole and have somehow made only a tiny fraction of it back, have a collective half-trillion-dollar motivation to get people to at least consider the AI bubble useful, and going "oooooh, AGI scaaaary" makes people think it's capable of things.
Do you have a link or a higher quality version? This one is almost unreadable.
edit: found it
But my lord…… the jpegs…… they are entirely insufficient……..
The paperclips are a nice touch
unreadable
Global warming is the dinosaurs getting revenge on the planet.
Turns out the black gold really is cursed
Open the image in a new tab and zoom in to at least 240%. It's fine.