Peanutbjelly

joined 2 years ago
[–] Peanutbjelly@sopuli.xyz 5 points 1 week ago

Bayesian analysis of complex intelligent systems via Friston's free energy principle and active inference? Or machine learning?

Personally, I love the stuff circling Michael Levin at Tufts University. I could also imagine there's a lot of unique model building in different biological/ecological niches.

[–] Peanutbjelly@sopuli.xyz 9 points 1 week ago

I read it more as a commentary on passive learning versus hands-on, thought-provoking methods. Although this rhetoric is likely often folded into the anti-academic opinions that seek to damage rather than improve schools, which you refer to.

I wish the Conservatives all understood that their more progressive values are progressive, and that when right-wing parties say they are going to 'change' things, they just mean regress and destroy, in abject ignorance of any actual thought.

The former interpretation of the comic is definitely important, as learning is actually tied to turning your brain on and interacting with the concept, rather than context-free single-fact retrieval, where most of the question is setup and your actual interaction with it is minimal.

Although I don't doubt that a lack of teachers, schools, or general funding is to blame for the simpler methods. Not that I haven't had a couple of teachers who didn't care two cents past the booklets they handed you.

So, your point is valid and important, but there is also a valid issue with the "style" of education.

[–] Peanutbjelly@sopuli.xyz -2 points 1 week ago (1 children)

i think it's a framing issue, and AI development is catching a lot of flak for the general failures of our current socio-economic hierarchy. also, people have been shouting "super intelligence or bust" for decades now. i just keep watching it get better much more quickly than most people's estimates, and i understand the implications of that. i do appreciate discouraging idiot business people from shunting AI into everything that doesn't need it, whether because it's a buzzword or because they can use it to exploit something. some likely just used it as an excuse to fire people, but again, that's not actually the AI's fault. that is this shitty system. i guess my issue is people keep framing this as "AI bad" instead of "corpos bad".

if the loom had never been invented, we would still live in an oppressive society sliding towards fascism. people tend to miss the forest for the trees when looking at tech tools politically. people are also blind to the environment around a technology, which is often more important than the thing itself. and the loom is still useful.

compression and polysemy growing your dimensions of understanding in a high-dimensional environment that is itself changing shape, comprehension growing with the erasure of your blindspots. collective intelligence (and how diversity helps cover more blindspots). predictive processing (and how we should embrace a lack of confidence, but understand the strength of proper weighting for predictions, even when a single blindspot can shift the entire landscape, making no framework flawless or perfectly reliable). and understanding that everything we know is just the best map of the territory we've figured out so far. if you want to judge how subtle but in-our-face blindspots can be, look up how to test your literal blindspot; you just need 30 seconds and a paper with two small dots to see how blind we are to our blindspots. etc.

more than fighting the new tools we can use, we need to claim them, and the rest of the world, away from those who ensure that all tools will only exist to exploit us.

am i shouting to the void? wasting the breath of my digits? will humanity ever learn to stop acting like dumb angry monkeys?

[–] Peanutbjelly@sopuli.xyz 1 point 1 week ago

let's make another article completely misrepresenting opinions, trajectories, and the general state of things, because we know it'll sell, and it'll get the ignorant fighting with those who actually have an idea of what's going on, because they read in an article that AI was eating the pets.

please find media sources that actually seek to inform, rather than provoke or instigate confusion and division through misrepresentation and disinformation.

these days you can't even try to fix a category error introduced by the media without getting cussed out and blocked from congregate sites, because you 'support the evil thing' that the article said was evil and that everyone in the group hates, without even an attempt to understand the context, or which part of the thing is even being discussed.

also, can we talk more about breaking up the big companies so they don't have a hold on the technology, rather than getting mad at everyone who interacts with modern technology?

legit, it's so bad it feels like fighting right-wing misinformation about migrant workers and trans people.

just make people mad, and teach them that communication is a waste of energy.

we need to learn how to tell who is informing rather than obfuscating, through a history of accuracy and consensus with other experts from diverse perspectives, not by building tribes around who agrees with us. and don't blame experts for failing to apply a novel and virtually impossible level of compression when explaining their complex expertise, when you don't even want to learn a word or concept. it's like asking someone to describe how cameras work, then calling them an idiot because an analogy they used can be imagined in a less useful context that doesn't map 1:1 onto the complex subject being summarized.

outside of that, find better sources of information. fuck this communication-disabling ragebait.

because now, just having a history of rebuking this garbage gets you dismissed; a history of interacting with the topic on this platform is treated as a good enough vibe check to skip any attempt at understanding and interaction.

TLDR: the quality of the articles and conversation on this subject is so generally ill-informed that it hurts, and they are obviously trying to craft environments of angry engagement rather than informing.

also i wonder if anyone will actually engage with this topic rather than get angry, cuss me out, and not hear a single thing being communicated.

[–] Peanutbjelly@sopuli.xyz 3 points 2 weeks ago

LLMs should come with a warning about exacerbating psychosis, especially where RLHF sycophancy is bad.

That being said, maybe there should be better help for, and responses to, those prone to psychosis in the first place.

The response (especially in the USA) has long been excessive violence and little chance at de-escalation. You can end up with psychosis from a reading of The Hitchhiker's Guide to the Galaxy, if you overfit to enough sparse connections (a precision-weighting issue, as described by Friston's dysconnection hypothesis).
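As a toy illustration of that precision-weighting point (my own sketch, not drawn from Friston's papers): in a Gaussian belief update, the weight an observation gets is its assigned precision, so inflating the precision of sparse, noisy evidence makes the belief swing hard toward it and become overconfident.

```python
def update(mu, pi, obs, obs_precision):
    """One precision-weighted update of a Gaussian belief.

    mu, pi: prior mean and precision (inverse variance).
    obs, obs_precision: the observation and the precision we assign it.
    The posterior mean is a precision-weighted average of prior and data.
    """
    new_pi = pi + obs_precision
    new_mu = (pi * mu + obs_precision * obs) / new_pi
    return new_mu, new_pi

# Start from a neutral belief and see one noisy outlier observation.
mu, pi = 0.0, 1.0
sane_mu, sane_pi = update(mu, pi, 5.0, 0.5)     # sensible weighting: belief barely moves
manic_mu, manic_pi = update(mu, pi, 5.0, 50.0)  # inflated precision: belief jumps toward 5

print(sane_mu, manic_mu)
```

With sensible weighting the belief moves only partway toward the outlier; with inflated precision it nearly adopts the outlier outright, and the resulting belief is also far more confident — a crude analogue of reading too much into sparse connections.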

Anthropocentric bias is a dead end. If AI could recognize and warn about psychosis-confirming behaviours, that could help. Not that people with psychosis usually have access to help in the USA. Let's just keep blaming the nearest thing so that we don't have to change our whole system.

[–] Peanutbjelly@sopuli.xyz 4 points 1 month ago (1 children)

Or maybe the solution is in dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer, rather than also kneecapping useful technology.

I feel much of the critique and repulsion comes from people without much knowledge of either art/art history or AI, nor of the problems and history of socio-economic policy.

Monkeys just want to be angry and throw poop at the things they don't understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued 'elite' abuses that shape our every aspect of life.

The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.

[–] Peanutbjelly@sopuli.xyz 5 points 1 month ago (2 children)

As a peasant, I know that professional help is not always available or viable. AI could very well have saved some of my friends who felt they had no available help and took their own lives. That being said, public-facing language models should come with a warning for exacerbating psychosis, notably sycophantic models like ChatGPT.

[–] Peanutbjelly@sopuli.xyz 3 points 1 month ago (1 children)

yes, absolutely. few things are binary. it's like people claiming pro-palestine protestors are antisemitic, or trying to take the valid examples of exceptions as an excuse for unrelated bigotry. it adds a lot of noise and makes the situation hard to navigate, so a lot of people running on low-dimensional heuristic maps of it will lash out and cause legitimate grievance between other people, who may or may not be able to contextualize what happened and why. those who can't contextualize it repeat the cycle, and socialize it.

this is why Russia has such an easy time causing division and self-segregating behaviour, and why anti-intellectualism and self-serving behaviour are so harmful. we are too hackable in contextually 'noisy' environments, and bad actors love using that to their advantage.

it takes a lot of energy and time to understand how many blindspots exist within our oversimplified predictions of the world, and how diverse environments and experiences, both physical and cyberphysical, lead other people to make different assumptions about what the world actually looks like. this includes our projections and expectations of others, battling our innate predictive modelling and its biases/blindspots.

the issue is when an audience running on those heuristics is making important choices that affect people. being overconfident in your over-binary predictions can cause damage that cycles into a self-fulfilling spiral of legitimate grievance. again, an easy fire to stoke.

 

one of my favourite things about AI art and Stable Diffusion is that you can get weird, dream-like worlds and architectures. how about a garden of tiny autumn trees?

