Perspectivist

[–] Perspectivist@feddit.uk 0 points 3 hours ago (1 children)

Trust what? I’m simply pointing out that we don’t know whether he’s actually done anything illegal or not. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for when you actually stop and think about the implications. And then there are those who don’t even care whether he did anything or not, they just want him convicted anyway - which is equally insane.

Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.

[–] Perspectivist@feddit.uk 1 points 9 hours ago

My word filters reliably block a third of my front page here. They include every keyword seen here, and much more.

[–] Perspectivist@feddit.uk 1 points 1 day ago* (last edited 1 day ago)

Well, I didn’t literally mean there’d be just a single place. Obviously, once you set that precedent, other places like it would start popping up too. But it’s not obvious to me that this is a bad thing - or at least worse than the alternatives. I don’t think there’s anything inherently wrong with people wanting to live among like-minded people and, in effect, build echo chambers for themselves.

I do think the philosophy behind it is immoral from my perspective, but that’s not really the point. What matters is the concrete effect this ideology has in the real world. And if, in our current cities, we have racists committing racist violence against minorities, then is it really so much worse to just let them all move off into their own little enclave where they can live out their perfect lives without any black people around, if that’s what they want? At least then the rest of us wouldn’t have to deal with them on a daily basis.

[–] Perspectivist@feddit.uk 3 points 1 day ago* (last edited 1 day ago)

Way to move the goalposts.

If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

Now compare that to how protein structures were solved before: years of wet-lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and consuming enormous amounts of chemicals and water in the process. AlphaFold collapses all of that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.
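
For scale, here's a rough back-of-envelope comparison in Python. Every number in it is an illustrative assumption, not a measured figure:

```python
# Illustrative only: how a single prediction's energy compares to the
# one-off training cost. All values are assumptions for the example.
GPU_POWER_KW = 0.4      # assumed draw of one data-center GPU, in kW
INFERENCE_HOURS = 1.0   # assumed wall-clock time for one prediction
TRAINING_MWH = 3.0      # assumed one-off training cost, in MWh

inference_kwh = GPU_POWER_KW * INFERENCE_HOURS
print(f"One prediction: ~{inference_kwh:.1f} kWh")

# Amortize the training cost over a million predictions.
per_query_kwh = TRAINING_MWH * 1000 / 1_000_000
print(f"Training share per prediction: {per_query_kwh:.4f} kWh")
```

Even with pessimistic numbers plugged in, that's a rounding error next to years of running lab instruments.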

So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.

[–] Perspectivist@feddit.uk 5 points 1 day ago (1 children)

Well, let’s hear some suggestions then.

[–] Perspectivist@feddit.uk -2 points 1 day ago (3 children)

I think they should be given a pass - same way women-only gyms get one. That hypothetical person of color has a million places they can go and just one where they’re not welcome. Seems like a fair trade to me.

[–] Perspectivist@feddit.uk 8 points 1 day ago

> Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.
>
> What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?
>
> Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.

Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.

[–] Perspectivist@feddit.uk 3 points 1 day ago* (last edited 1 day ago) (3 children)

> It won’t solve anything

Go tell that to AlphaFold, which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy.
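
Those structures are all publicly available, too. Here's a minimal sketch of pulling one down - the endpoint and response fields are my assumptions from memory, so double-check against the actual AlphaFold DB docs:

```python
import requests

# Example UniProt accession (human hemoglobin subunit alpha) - swap in any ID.
UNIPROT_ID = "P69905"

# Assumed public endpoint of the AlphaFold Protein Structure Database API.
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

entry = resp.json()[0]   # the API is assumed to return a list of entries
print(entry["pdbUrl"])   # link to the predicted 3D structure file
```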

[–] Perspectivist@feddit.uk 7 points 2 days ago (2 children)

Stay strong, brother. Out of solidarity, I’m going to go label a few more things with my new white paint marker - purely out of spite.

[–] Perspectivist@feddit.uk 3 points 2 days ago

I mean - it’s certainly possible, but you’d still be risking that 500k prize if you got caught.

And most people seem to tap out because of loneliness or starvation, so if you were going to cheat, you’d pretty much have to smuggle in either food or a better way of getting it - like a decent fishing rod and proper lures.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
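
To make that “statistical pattern machine” point concrete, here’s a toy sketch in Python. It’s a bigram counter, not a transformer - incomparably simpler than ChatGPT - but the core loop is the same idea: continue the prompt by sampling the next word from whatever tended to follow the current one in the training text:

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" - real LLMs learn from trillions of words.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def continue_prompt(word: str, length: int = 5) -> str:
    """Extend a one-word prompt by sampling likely next words."""
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # word never appeared mid-text; nothing plausible follows
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_prompt("the"))  # e.g. "the cat sat on the mat"
```

Nothing in that loop knows or checks facts - it only knows what tends to come next. Scale the same idea up enormously and you get fluent text that often happens to be true, which is exactly the side effect described above.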

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
