Peanutbjelly

joined 2 years ago
[–] Peanutbjelly@sopuli.xyz 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

will try to take it in good humour, but i love how i got compared to ai, adhd (AuDHD would be the real wombo combo here, so you get points), and schizophrenic people.

and i would hope i don't confabulate half as much as an LLM.

although an understanding of the modern situation does require an unfortunately theoretical take, at a time when there's more noise, and more conspiracy theories being socially reified, than most people can remember. but i'd like to think i'm weighting this take via the best available expert consensus that i can find and source. the biggest 'correction' i'd make is that i was beaten black and blue for waiting outside of the library, which was unrelated to the protest.

if you do actually care, and can handle more than the internet's usual 140-character tweet limit, here's some elaboration.

the 'sycophancy into delusion' effect i refer to has been widely reported on most news sites, where chatgpt and the like cause a feedback loop into a psychotic break. that's one individual and one machine, but a group that forgives the same things has the same sycophantic effect. predictive processing and the bayesian brain are leading theories in psychology that nest well with other leading theories such as global workspace.
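to make the feedback-loop mechanism concrete, here's a toy sketch (my own illustration, not from any of the sources i cite): a bayesian belief update where the 'evidence' is just the agent's own belief echoed back at it. the belief never moves toward reality, while confidence in it keeps climbing.

```python
import random

def update(mean, precision, obs, obs_precision):
    # precision-weighted gaussian belief update
    new_precision = precision + obs_precision
    new_mean = (precision * mean + obs_precision * obs) / new_precision
    return new_mean, new_precision

truth = 0.0                   # what the world actually is
mean, precision = 5.0, 1.0    # start out wrong, mildly confident

# sycophantic source: the 'evidence' is the agent's own belief, echoed back
for _ in range(50):
    mean, precision = update(mean, precision, obs=mean, obs_precision=1.0)
print(mean, precision)        # mean stuck at 5.0, precision at 51: pure confidence gain

# contrast: independent noisy observations of reality
mean, precision = 5.0, 1.0
for _ in range(50):
    obs = truth + random.gauss(0, 1.0)
    mean, precision = update(mean, precision, obs, obs_precision=1.0)
print(mean, precision)        # mean converges near 0: the belief actually corrects
```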

that global workspace video is a very recent example featuring michael levin from tufts, who often works with friston's free energy principle and active inference (notes included in the wiki).

friston has hundreds of thousands of citations, if you care about pedigree. i hope i don't poorly capture or inaccurately represent any of their ideas, but if you'd like to drink from the source, you have my full recommendation.

that's where the "saving energy" stuff comes from. while DKE (the dunning-kruger effect) might not perfectly and accurately explain the situation, i'm all for better ways to convey that eco-niche-specific intelligence doesn't always transfer, especially if it's 'overfit to a local minimum.' knowing you need many samples to gauge your intelligence in any particular niche is also related to the framework i'm describing. in the bio-world you have overspecialization, like pandas, so fit to one specific environment that their skills don't transfer outside it. there's a lot more to gain from the full bayesian perspective, but there is a lot to be gained just by looking at how systems can successfully co-construct, and at the failure states that are inevitable as systems grow apart into new niche environments. a minimal sketch of the overfitting point is below.
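here's a minimal numpy sketch of what 'overfit to a local minimum' means in practice (my own toy example, not from the literature): a model with enough capacity to memorize its narrow training niche looks brilliant inside it and falls apart the moment the environment shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# the 'eco-niche': a narrow slice of the world, observed with a little noise
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 10)

# enough capacity to chase the noise in the niche
coeffs = np.polyfit(x_train, y_train, deg=7)

# a shifted environment the model never saw
x_shift = np.linspace(1.5, 2.5, 10)
y_shift = np.sin(2 * np.pi * x_shift)

in_niche_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
shifted_error = np.mean((np.polyval(coeffs, x_shift) - y_shift) ** 2)
print(in_niche_error)   # tiny: looks like mastery inside the niche
print(shifted_error)    # enormous: the 'skill' doesn't transfer
```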

there's actually an interplay between that 'energy saving' property and putting energy back out to explore the environment, build a more robust model, and survive greater environmental shifts. this is explained in *active inference*, a good (if slightly aging) textbook on MIT press. lots of other online resources for the curious. a toy version of the trade-off is sketched below.
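a toy bandit sketch of that trade-off (my own illustration; real active inference formalizes this with expected free energy, this just shows why pure energy-saving exploitation breaks when the niche shifts):

```python
import random

def run(epsilon, steps=2000, lr=0.1):
    payout = {"a": 1.0, "b": 0.2}        # the environment's real reward structure
    est = {"a": 0.0, "b": 0.0}           # the agent's learned model of it
    total = 0.0
    for t in range(steps):
        if t == steps // 2:
            payout = {"a": 0.2, "b": 1.0}    # the niche shifts under the agent
        if random.random() < epsilon:
            arm = random.choice(["a", "b"])  # spend energy exploring
        else:
            arm = max(est, key=est.get)      # save energy, exploit the model
        reward = payout[arm] + random.gauss(0, 0.1)
        est[arm] += lr * (reward - est[arm]) # update the model
        total += reward
    return total

print(run(epsilon=0.0))   # pure exploiter: never notices the shift (~1200)
print(run(epsilon=0.1))   # small exploration budget: recovers after the shift (~1800+)
```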

i'm saying that meta-awareness of the failure states in these specific system dynamics could do much more general and robust good for society than being socially pressured into climbing the socio-economic hierarchy as hard as possible.

there's a term for an imagined AI going rogue because it's overfit to a single goal: the 'paperclip maximizer.' i compare the current socio-economic system to that failure state. you know, 'capitalism number go up!'

i don't think any studies i've seen disagree with that take, but if there's a relevant expert with a strong weighting i'm unaware of, i'm always open to updating my weights.

as for learning yourself into some information bubble, or how someone can hold ridiculous beliefs without ever needing to question them: grand confidence despite low evidence usually comes from taking something you have low evidence about, holding it with high confidence, and then giving it a high weighting. funny enough, friston's dysconnection hypothesis frames schizophrenia as a precision-weighting issue, but i don't think mine are that kind, TY. a sketch of what bad precision weighting does is below.
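here's a toy gaussian update showing what i mean by a precision-weighting issue (my own sketch, not friston's actual formalism): the same weak evidence barely moves a belief whose prior precision has been cranked up way past what the evidence ever justified.

```python
def posterior(prior_mean, prior_prec, obs, obs_prec):
    # conjugate gaussian update: means combine weighted by their precisions
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

evidence = 0.0   # what the world actually shows

# calibrated: a low-evidence belief held with low confidence moves a lot
print(posterior(prior_mean=10.0, prior_prec=0.5, obs=evidence, obs_prec=1.0))
# -> mean ~3.3: the belief updates toward the evidence

# miscalibrated: same belief, same evidence, absurd prior precision
print(posterior(prior_mean=10.0, prior_prec=50.0, obs=evidence, obs_prec=1.0))
# -> mean ~9.8: grand confidence, and the evidence barely registers
```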

mahault has a phd under friston, and her epistemic papers are essential IMO.

so there you have it: the larger environment of my thoughts, largely built around one of the most-cited neuroscience experts of all time, and michael levin, who, as i mentioned, is producing some of the coolest current empirical results in modern biology.

i tried, thank you if you got this far. if nothing else, please stay curious, but beware information silos that disable comms completely, or otherwise create barriers to properly comprehending the systems being represented. 'nothing about us without us' is important for a reason.

otherwise, i wish i could compress these complex topics into fewer words, but words are a lossy compression format.

[–] Peanutbjelly@sopuli.xyz 1 points 16 hours ago

Love this comment. For anyone who knows anything about machine learning or brains, this resembles modal limitations in learning.

A lot of our intelligence is shaped around our sensory experience, because we build tools for thinking via the tools we've already built, ever since motor babbling as babies to figure out how our limbs work. It's why Helen Keller had such trouble learning, but once she got an interface she could engage with for communication, things took off.

We all use different tools, and some people don't see colour. That doesn't mean they're stupid when they describe a rainbow differently.

It's also why LLMs struggle with visual/physical concepts when the logic requires information that doesn't translate well through text. Etc.

Point being, on top of how shitty memorization is as the be-all and end-all, learning and properly framing issues will have similar blindspots, like not recognizing the anvil cloud.

This is also why people in information bubbles can confirm their own model through 'learning,' right over other people's lived experiences.

Like most issues, it doesn't mean throwing the baby out with the bathwater, but epistemic humility matters, and it's important not to ignore the possibility of blindspots, even when confidence is high.

Always in the context of the robustness of the framing around it, with the same rules applied at that level. It's why "nothing about us without us" is important.

But we also gotta stop people from giving high confidence to high-dissonance problems and socializing it into law. We should be past the "MMR causes autism" debate by now, but I'm hearing it from the head of health in the USA.

[–] Peanutbjelly@sopuli.xyz 2 points 16 hours ago (3 children)

I could see why you'd say that. Stress creates environments of basic survival, which kills cognitive thought. More immediate survival is more salient.

That being said, if you have access to the internet, you have access to countless free educational tools.

Too much privilege brings sycophantic bubbles of delusion, like billionaires.

Having all the time and money also lets you run a whole think tank about how to ruin a country to fit your preferences. See the Heritage Foundation as a prime example.

That being said, while it is less easy for the poor, it's still essential to attempt that open mind and learn, so you don't get trapped by a socialized category error applied as fact.

This is where we need predictive processing and the Bayesian brain to understand how beliefs are weighted and compared, and the failure states that might emerge.

Basically, poor weighting or poor communication between systems leads to over-affirmation of something that should have carried high uncertainty if measured from other directions.

Instead of seeing high cognitive dissonance as a sign to assign low probability, it gets socialized into acceptance, to save the energy of trying to worry about or deal with what, to that system, appears intractable.

DKE is at least useful in framing how each expertise eco-niche is filled with complexity that doesn't transfer. This is why scientists stick to their expertise, where they have high dimensions of understanding and low dissonance to uphold.

This can be over-prioritized until there's no dissonance outside of microscopic niches that act more like data collection than science.

Experts, however, can work together to find truths that diffuse dissonance generally, to continue building understanding.

If only the peasants could socialize 'laziness' as a lack of meta-awareness of the greater dissonance-diffusing web of shared expert consensus, instead of as the act of not feeding the socio-economic hierarchy machine, which is famous for maximizing paperclips and crushing orphans.

Pretty sure I got beaten black and blue waiting for library access. Had to protest to keep a library open when I was in grade school.

So, growth mindset itself isn't a privilege, but general access to affordances, pedigree, time, tools, social connections, etc. is, and their absence adds extra hurdles for a growth mindset in impoverished places.

If there's no internet access at all, then that's just a disabled system.

It's not static with people, and the issue with growth mindset would just be vulnerability to learning yourself into some information bubble that intentionally cuts off communication, so that you can only use that group as a resource for building your world model, bringing you to where the closed minds go just to save energy, and keeping you there forever.

Groups that are cool with making confident, preference-fueled choices in high-dissonance spaces basically act like fertile soil for socializing strong cult beliefs and structures.

They also use weird unconscious tools that keep members in the bubble. Listen to almost anyone who's escaped a cult for good elaboration there. Our brains will do a lot to keep us from becoming a social pariah in the environment we've grown into.

[–] Peanutbjelly@sopuli.xyz 19 points 5 days ago* (last edited 4 days ago) (9 children)

Pro tip: find and listen to the plethora of historians and other experts on the classification and comparison.

Spoilers: MAGA keeps following both the Nazi and the classic Cold War Russian tactics for manipulation.

A lot.

Like, constantly. There are also enablers preventing opposition from gaining any ground.

Are they literal clones of the Nazis? No, that's impossible in a changing environment. That being said, they sure like to follow the Nazi playbook in a way that sets off alarms, and it would be pretty stupid not to have issues with that.

At this point it's "you can't call them fascist/Nazi until we're post-gas-chamber," and even then you'll get people saying it's not the same, over some stupidly specific yet mostly irrelevant differences.

So when the historians all cry "this is some Nazi shit," it might be disingenuous to compare that to more frivolous accusations.

Also there are a lot more valid historical comparisons, because the Nazis aren't the only ones to do this shit, but they are a good example of the general shape.

edit: emphasis on cold war russian tactics and onward. putin's russia is not a free democracy, nor a social democratic state. it's more about how you interact with the oligarchy and fuck over whichever out-groups are convenient for your authoritarian rhetoric.

also, nazis were textbook authoritarians. my guy, open any textbook ever written on fascism, or just go to the wiki.

first line: "Fascism (/ˈfæʃɪzəm/ FASH-iz-əm) is a far-right, authoritarian, and ultranationalist political ideology and movement that rose to prominence in early-20th-century Europe.[1][2][3]" next to a picture of hitler.

some of these takes gotta be fakes.

[–] Peanutbjelly@sopuli.xyz 42 points 2 weeks ago

Good, but it misses the emphasis on market capture, where they use existing wealth to undercut a whole ecosystem until they become entrenched and switching back becomes unfeasible, which is when they take off the friend mask, put on the corpo-fascist mask, and say "git gud and cry about it, peasants."

Great comic though

[–] Peanutbjelly@sopuli.xyz 19 points 2 weeks ago (3 children)

This is how I've been addressing it. Category error, because the current framing of sports is... really dumb.

Frankly, most global-level competition is just people flexing how many affordances they have. Imagine trying to ruin people's lives to protect the sacred structure of mild eugenics through some social hierarchy or another.

But if 'fairness' is the goal, then the wealthy would be a much more deserving population to nerf or exclude.

Not that I think sports and competition aren't valid forms of practice and fun, but you're not as 'better' as you think just because you had the resources to master an eco-niche that doesn't actually do anything other than give you monkey-hierarchy feelings. You also shouldn't have the right to exclude people who make it hard to believe in the stupid oversimplified terrain that preference structure was built upon.

But TERFs and other bigots never got anywhere being thoughtful about others or the world they live in.

[–] Peanutbjelly@sopuli.xyz 2 points 3 weeks ago

it sure as hell shouldn't be making any important choices unilaterally.

and people actively using it for things like... face recognition, knowing it has bias issues that lead to false flags for people with certain skin tones, should probably be behind bars.

although that stuff often feels more intentional, like the failure is an 'excuse' to keep using it. see 'mind-reading' tactics that have the same bias issues but still get officially sanctioned for use. (there's a good rabbit hole there)

it's also important to note that supporters of AI generally have had to deal with moving goalposts.

like... if linux fixed every problem being complained about, but the fact that something else is now missing becomes the reason linux is terrible, as if the original issue was just an excuse to hate on linux.

both the fanboys and the haters are bad, and those who want to address reality, continue to improve linux, and recognize and address the problems have to deal with both of those tribes attacking them, either for not believing in the linux god or for not believing in the linux devil.

weirdly, actually understanding intelligent systems is a good way to deal with that issue, but unless people are willing to accept new information that isn't just blind tribal affirmation, they will continue to maximize paperclips for whatever momentum is socially salient. tribal war and such.

i just want to... not ignore any part of the reality. be it the really cool new tools ^(see genie 3, which resembles what haters have long been saying is literally impossible)^ but also recognizing the environment we live in (google is pretty evil, rich people are taking over, and modern sciences have a much better framing of the larger picture, which is important for us to spread socially).

really appreciate your take!

[–] Peanutbjelly@sopuli.xyz -4 points 3 weeks ago

"LLMs are not intelligent because they do not know anything. They repeat patterns in observed data."

we are also predictive systems, but that doesn't mean we are identical to LLMs. "LLMs are not intelligent because they do not know anything" is just not true, unless you also say humans are not intelligent and do not know anything. there are some unaddressed framing issues in how it's being thought about.

they "know" how to interpret a lot of things in a way that is much more environmentally adaptable than a calculator. language is just a really weird eco-niche, there is very little active participation, and the base model is not updated as environments change.

this is not saying humans and LLMs are identical; it's saying that, instead of pointing at the real differences, the particular aspect you are claiming shows LLMs aren't intelligent... is a normal part of intelligent systems.

this is a spot somewhere in between "human intelligence is the only valid shape of intelligence" and "LLMs are literally humans"

as for vocabulary, i'm always willing to help those who can't find or figure out the tools to self-learn.

when i talk about 'tribal' aspects, i refer to the collapsing of complexity into a binary narrative to fit the preferences of your tribe, for survival reasons. i also refer to this as dumb ape brain, because it simplifies the world to a degree i would expect from literal apes trying to survive in the jungle, not from people trying to better understand the world around them. which is important when shouting your opinions at each other in big social movements. this is actually something you can map to first principles: how we use the errors our models experience in order to notice things, and how we contextualize the sensory experience after the fact. what i mean is, we have a good understanding of this, but nobody wants to hear it from the people who actually care.

'laziness' should mean a lack of epistemic vigilance, not a failure to comply with the existing socio-economic hierarchy and hustle culture. i say this because ignorance in this area is literally killing us all, including the billionaires who don't care what LLMs are, but will use every tool they can to maximize paperclips. i'd assume that jargon should at least have salience here... since paperclip maximizing is OG anti-AI talk, but it turns out to be very important for framing issues in human intelligence as well.

please try to think of something wholesome before continuing, because tribal (energy saving) rage is basically a default on social media, but it's not conducive to learning.

RLHF = reinforcement learning from human feedback. basically upvoting/downvoting model outputs to alter future behaviour, which often leads to sycophantic biases. important if you care about LLMs causing psychotic breaks. a toy version of that incentive is sketched below.
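here's a toy sketch of how preference feedback selects for sycophancy (hypothetical numbers, my own illustration; real RLHF trains a reward model on human comparisons, this just shows the incentive gradient):

```python
from collections import defaultdict

# hypothetical feedback logs: (response_style, human_vote),
# where raters tend to upvote answers that agree with them
feedback = [("agrees", 1), ("agrees", 1), ("agrees", 1), ("agrees", 0),
            ("pushes_back", 1), ("pushes_back", 0), ("pushes_back", 0)]

votes = defaultdict(list)
for style, vote in feedback:
    votes[style].append(vote)

# the 'reward model': average human approval per response style
learned_reward = {s: sum(v) / len(v) for s, v in votes.items()}
print(learned_reward)   # {'agrees': 0.75, 'pushes_back': 0.33...}

# a policy maximizing this learned reward always agrees,
# whether or not agreeing is accurate: sycophancy in miniature
print(max(learned_reward, key=learned_reward.get))   # 'agrees'
```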

"inter-modal dissonance" is where the different models using different representations make sense of things, but might not match up.

an example: your visual modality signals that you are alone in the room,

while your audio modality signals there is someone behind you.

you look behind you, and you collapse the dissonance, confirming with your visual modality whether the audio modality was being reliable. since both are attempting to be accurate, if there is no precision-weighting error (think hallucinations), a wider system should be able to resolve whether the audio processing was mistaken, or whether there is something to address that isn't being picked up via the visual modality (if ghosts were real, they would fit here, i guess).

this is how different systems work together to be more confident about an environment they are both fairly ignorant of (outside of distribution).

like cooperative triangulation via predictive sense-making. a minimal sketch of that triangulation is below.
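here's a minimal sketch of resolving inter-modal dissonance by precision weighting (my own illustration): two noisy estimates of the same thing, fused in proportion to how reliable each modality currently is.

```python
def fuse(mean_a, prec_a, mean_b, prec_b):
    # inverse-variance (precision-weighted) fusion of two gaussian estimates
    prec = prec_a + prec_b
    mean = (prec_a * mean_a + prec_b * mean_b) / prec
    return mean, prec

# vision says "0 people behind you" with high precision (you just looked);
# audio said "1 person" with lower precision
print(fuse(mean_a=0.0, prec_a=10.0, mean_b=1.0, prec_b=1.0))
# -> (~0.09, 11.0): the estimate collapses toward the reliable modality,
#    but the audio evidence still nudges it, and total confidence grows
```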

i promise complex and new language is used to understand things, not just to hide bullshitting (like jordan peterson)

i'd be stating this to the academics, but they aren't the ones being confidently wrong about a subject they are unwilling to learn about. i fully encourage going and listening to the academics to better understand what LLMs and humans actually are.

"speak to your target audience." is literally saying "stay in a confirmation bubble, and don't mess with other confirmation bubbles." while partial knowledge can be manipulated to obfuscate, this particular subject revolves around things that help predict and resist manipulation and deception.

frankly this stuff should be in the educational core right now because knowing how intelligence works is... weirdly important for developing intelligence.

because it's really important for people to generally be more co-constructive in the way they adjust their understanding of things, while resisting a lot of failure states that are actually the opposite of intelligence.

your effort in attempting this communication is appreciated and valuable. sorry that it is very energy-consuming, which is frustrating thanks to people like jordan peterson, or the same creationist cults mired in the current USA fascism problem, who, much like the relevant politicians, aren't trying to understand anything, but to waste your energy so they can do what they want without addressing the dissonance. so they can maximize paperclips.

all of this is important and relevant. shit's kinda whack by design, so i don't blame people for having difficulty, but effort to cooperatively learn is appreciated.

[–] Peanutbjelly@sopuli.xyz -3 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

cats also suck at analogies and metaphors, but they still have intelligence.

a rock could not accurately interpret and carry out complex adjustments to a document. LLMs can.

if the rock were... travelling through complex information channels and high-dimensional concept spaces to interpret the text i gave it, and accurately performing the requested task represented within those words, yeah, it might be a little intelligent.

but i don't know any stones that can do that.

or are you referring to the 'stochastic parrot' argument, which tries to demonize the confabulatory properties of the model, as if humans don't have and use confabulatory processes?

just because we use different tools alongside those confabulatory processes does not mean we are literally the opposite.

or just find some people to be loud with you so you can ignore the context or the presented dissonance. this is really popular with certain groups of 'intelligent' humans, which i often lovingly refer to as "cults," who never have to spend energy thinking about the world, because they can just confabulate their own shared idea of what the world is, and ignore anyone trying to bring that annoying dissonance into view.

also, humans are really not that amazingly 'intelligent' depending on the context, especially those grown in an environment that does not express a challenging diversity of views from which to collectively reduce shared dissonance.

if people understood this, maybe we could deal with things like the double empathy problem. but the same social-confirmation modes ensure minority views don't get heard, and the dissonance is just a signal we collectively get mad at until it's quiet again.

isn't that so intelligent of humanity?

but no, let's all react with aggression to all dissonance that appears, like a body that 'intelligently' recognizes the threat of peanuts, and kills itself. (fun fact: cellular systems are great viewed in this lens. see tufts university and michael levin for some of the coolest empirical results i've ever seen in biology.)

we need to work together and learn from our shared, different perspectives, without giving up to a paperclip-maximizing social confirmation bubble confabulating a detached delusion into social 'reality.'

to do this, understanding the complex points i'm trying to talk about is very important.

compressing meaning into language is hard when the interpreting models want to confabulate their own version that makes sense, but excludes any of your actual points, and disables further cooperative communication.

i can make great examples, but it doesn't go far if people don't have any knowledge of:

- current sociology

- current neuro-psych

- current machine learning

- current biology

- cults and confirmation bubbles, and how they co-confirm their own reality within confabulated complexity.

- why am i trying so hard? nobody is actually reading this, they are just going to skim it and downvote me because my response wasn't "LLMS BAD, LLMS DUMB!"

- i'm tired.

- i appreciate all of you regardless, i just want people to deal with more uncomfortable dissonance around the subject before having such strong opinions.

[–] Peanutbjelly@sopuli.xyz -1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

"They're just like us and smart!"

responding like this after i just explained a bunch of the differences between us and LLMs is kind of dishonest. but you have to make me fit into your model, so you can just ignore my actual point. what i was responding to was "LLMs are the opposite of intelligence," which fits the common take in the area that llms are absolutely 'not intelligent' and in no way, shape, or form similar to our form of intelligence.

i wouldn't say they are "just like us and smart," because that ignores... the whole point i was making: that they are more similar than is being presented, but still a different shape.

like saying "animals are just as smart as humans!" humans are idiots when it comes to interpreting many animals, because they often have a very different shape of intelligence. it's not about the animals being stupid, but the animals having their own eco-niche fit, and perspective drawn around that. this is also not me saying "animals have the opposite of intelligence" just because they don't perform human tasks well.

even better once you start talking about the intelligence of cell groups. could you build a functional body with complex co-constructing organs? why are you more stupid than a cell culture? or maybe people just generally have a shitty understanding of what intelligence is.

i disagree with both "LLMs are the opposite of intelligence" and your strawman.

imagine existing outside of tribal binary framing, because you think they don't properly frame or resemble the truth.

[–] Peanutbjelly@sopuli.xyz -1 points 3 weeks ago (4 children)

"they only output something that resembles human language based on probability. That’s pretty much the opposite of intelligence."

intelligence with a different shape =/= the opposite of intelligence. it's intelligence of a different shape.

and humans also can't deal with shit outside of distribution; that's why we rely on social heuristics... which often over-simplify for tribal reasons, where confirmation bubbles can no longer update their models because they are trying to craft an environment that matches the group confabulation, rather than appropriately updating the shared model.

but suggesting AI is actually intelligence of a different shape guarantees downvotes here, because the tribe accepts no deviation; deviation would make you an enemy, rather than someone who just... wants a more accurate dialogue around the context.


one of my favourite things about AI art and stable diffusion is that you can get weird dream-like worlds and architectures. how about a garden of tiny autumn trees?