
the-podcast guy recently linked this essay, it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

[-] dat_math@hexbear.net 48 points 4 months ago

As a REDACTED who has published in a few neuroscience journals over the years, this was one of the most annoying articles I've ever read. It abuses language and deliberately misrepresents (or misunderstands?) certain terms of art.

As an example,

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

The neuronal circuitry that accomplishes the solution to this task (i.e., controlling the muscles to catch the ball), if it's actually doing some physical work to coordinate movement in a way that satisfies the condition given, is definitionally doing computation and information processing. Sure, there aren't algorithms in the usual way people think about them, but the brain in question almost surely has a noisy/fuzzy representation of its vision and its own position in space, if not also that of the ball it's trying to catch.

For another example,

no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain

in any sense?? really? what about the physical sense in which aspects of a visual memory can be decoded from visual cortical activity after the stimulus has been removed?

Maybe there's some neat philosophy behind the seemingly strategic ignorance of precisely what certain terms of art mean, but I can't see past the obvious failure to articulate what the scientific theories in question nominally purport to be able to access.

help?

[-] Frank@hexbear.net 17 points 4 months ago

The deeper we get in to it the more it just reads as old man yells at cloud and people who want consciousness to be special and interesting being mad that everyone is ignoring them.

[-] Philosoraptor@hexbear.net 15 points 4 months ago

Yeah, this is just as insane as the people who think GPT is conscious. I've been trying to give a nuanced take down thread (also an academic, with a background in philosophy of science rather than the science itself). I think this resonates with people here because they're so sick of the California Ideology narrative that we are nothing but digital computers, and that if we throw enough money and processing power at something like GPT, we'll have built a person.

[-] Frank@hexbear.net 45 points 4 months ago* (last edited 4 months ago)

Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

I mean, protip, if you ask people to discard all of their language for discussing a subject, they're not going to be able to discuss the subject. This isn't a gotcha. We interact with the world through symbols and metaphors. Computers are the symbolic language with which we discuss the mostly incomprehensible function of about a hundred billion weird little cells squirting chemicals and electricity around.

Yeah, I'm not going to finish this, but it just sounds like god-of-the-gaps contrarianism. We have a symbolic language for discussing a complex phenomenon that doesn't really reflect the symbols we use to discuss it. We don't know how memory encoding and retrieval work. The author doesn't either, and it really just sounds like they're peeved that other people don't treat memory as an irreducibly complex mystery never to be solved.

Something they could have talked about: our memories change over time because, afaik, the process of recalling a memory uses the same mechanics as the process of creating a memory. What I'm told is that we're experiencing the event we're remembering again, and because we're basically doing a live performance in our head, the act of remembering can also change the memory. It's not a hard drive; there are no ones and zeroes in there. It's a complex, messy biological process that arose under the influence of evolution, aka totally bonkers bs. But there is information in there. People remember strings of numbers, names, locations, literal computer code. We don't know for sure how it's encoded, retrieved, manipulated, "loaded into ram", but we know it's there. As mentioned, people with some training can recall enormous amounts of information verbatim. There are, contrary to the dollar experiment, people who can reproduce images with high detail and accuracy after one brief viewing. There's all kinds of weird eidetic memory and outliers.

From what I understand most people are moving towards a system model: memories aren't encoded in a cell, or as a pattern of chemicals; it's a complex process that involves a whole lot of shit and can't be discretely observed by looking at an isolated piece of the brain. You need to know what the system is doing. To deliberately poke fun at the author: it's like trying to read the binary of a fragmented hard drive, it's not going to make any sense. You've got to load it into memory so the index that knows where all the pieces of the files are stored on the disk can assemble them into something useful. Your file isn't "stored" anywhere on the disk. Binary is stored on the disk. A program is needed to take that binary and turn it into readable information.

"We're never going to be able to upload a brain" is just whiny contrarian nonsense, it's god of the gaps: we don't know how it works now, so we'll never know how it works. So we need to produce a 1:1 scan of the whole body and all its processes? So what, maybe we'll have tech to do that some day. Maybe we'll, you know, skip the whole "upload" thing and figure out how to hook a brain into a computer interface directly, or integrate the meat with the metal. It's so unimaginative to just throw your hands up and say "it's too complicated! digital intelligence is impossible!" Like come on, we know you can run an intelligence on a few pounds of electrified grease. That's a known, unquestionable thing. The machine exists, it's sitting in each of our skulls, and every year we're getting better and better at understanding and controlling it. There's no reason to categorically reject the idea that we'll some day be able to copy it, or alter it in such a way that it can be copied. It doesn't violate any laws of physics, it doesn't require goofy exotic particles that exist only on paper. It's just electrified meat.
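To make the fragmented-hard-drive analogy above a little more concrete, here's a toy sketch in Python (the block numbers and "index" are made up purely for illustration, nothing like a real filesystem):

```python
# Toy "disk": the file's bytes are scattered across non-adjacent blocks.
disk = {
    7:  b"rld!",      # fragment 3
    2:  b"Hello, ",   # fragment 1
    12: b"wo",        # fragment 2
}

# Toy "index" (think FAT/inode): which blocks, in what order, make up the file.
index = {"greeting.txt": [2, 12, 7]}

def read_file(name):
    """Reassemble a file by walking the index; the raw blocks alone are gibberish."""
    return b"".join(disk[block] for block in index[name])

print(read_file("greeting.txt"))  # b'Hello, world!'
```

Reading the blocks in raw disk order gets you nonsense; the "file" only exists once a process walks the index and assembles it. That's the point: the thing you experience isn't sitting in any one spot waiting to be read out.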

Also, if bozo could please explain how trained oral historians and poets can recall thousands of stanzas of poetry verbatim with few or no errors I'd love to hear that, because it raises some questions about the dollar bill "experiment".

[-] Dessa@hexbear.net 35 points 4 months ago

Moreover, we absolutely do have memory. The concept existed before computers and we named the computer's process after that. We have memories, and computers do something that we easily liken to having memories. Computer memory is the metaphor here

[-] Frank@hexbear.net 31 points 4 months ago

Yeah, it's a really odd thing to harp about. Guy's a psychologist, though, and was doing most of his notable work in the 70s and 80s which was closer to the neolithic than it is to modernity. I think this is mostly just "old man yells at clouds" because he's mad that neuroscience lapped psychology a long time ago and can actually produce results.

[-] DamarcusArt@lemmygrad.ml 15 points 4 months ago

Ok, that was great and all, but could you give this short essay again without mentioning any of the brain's processes or using vowels? If you can't, it proves your whole premise is flawed somehow.

[-] Frank@hexbear.net 11 points 4 months ago

Right? This is what happens when you let stem people loose without a humanities person to ride herd on them. Any anthropologist would tell you how silly this is.

[-] plinky@hexbear.net 10 points 4 months ago* (last edited 3 months ago)

You don't remember the text, though, and recitations of stanzas can sometimes have word substitutions that fit rhythmically.

If I asked you what the 300th word of the poem is, you couldn't do it. A computer can. If I start with two words of a verse, you could immediately continue. It's a sequence of words with meaning; outside of a couple thousand competitive pi-memorizers, people cannot remember gibberish. Try to remember the hash of something for a day. That's significantly less memory, whether as a word vector or a symbol vector, than a haiku.

Re: language, how far did the mechanical analogy take us? Until equations or language corresponding to reality are used, you are fumbling about fitting round spheres into spiral holes. Sure, you can use the Ptolemaic system and keep adding new circular components, or you can realize orbits are ellipses.

History of science should actually horrify science bros: 300 years ago scientists firmly believed phlogiston was the source of burning, 100 years ago aether was all around us and our brains were ticking boxes of gears, 60 years ago neutrinos didn't have mass while DNA was happily deterministically making humans. Whatever we believe now to be scientific truth, by historic precedent, likely isn't (in the sense of correspondence between model and reality). Models are getting better all the time (increasing correspondence), but I don't know of a perfect scientific theory (maybe chemistry is sorta solved, with fiddling around the edges).

[-] Frank@hexbear.net 19 points 4 months ago

Why would that horrify us? That's how science works. We observe the world, create hypotheses based on those observations, develop experiments to test those hypotheses, and build theories based on whether experimentation confirmed our hypotheses. Phlogiston wasn't real, but the theory conformed to the observations made with the tools available at the time. We could have this theory of phlogiston, and we could experiment to determine the validity of that theory. When new tools allowed us to observe novel phenomena, the phlogiston theory was discarded. Science is a philosophy of knowledge: the world operates on consistent rules, and these rules can be determined by observation and experiment. Science will never be complete. Science makes no definitive statements. We build theoretical models of the world, and we use those models until we find that they don't agree with our observations.

[-] plinky@hexbear.net 9 points 4 months ago* (last edited 4 months ago)

*Because if you confidently rely on the model's prediction (in this case the informational model), like "ooh, we could do a brain in computer space no problem," you are not exactly making a good scientific prediction. The good scientific prediction is that the model is likely garbage until proven otherwise, and thus shouldn't be treated as the be-all and end-all.

But then, if you take the information processing model, what exactly does it give you in understanding the brain? The author's contention is that it's a hot garbage framework: it doesn't fit how the brain works, your brain is not a tiny HDD with RAM and a CPU, and as long as you think it is, you will be chasing mirages.

Yes, neural networks are much closer (because they are fucking designed to be), and yet even they have to be force-fed random noise to introduce fuzziness into their responses, or they'll do the same thing every time. You reboot and reload a neural net, it will do the same thing every time. But the brain is not just connections of axons; it's also the extremely complicated state of the neuron itself, with point mutations, DNA repairs, expression levels, random RNA garbage flowing about, lipid rafts at synapses, vesicles missing because microtubules decided to chill for a day, the hormonal state of the blood, the input from the sympathetic nervous system, etc.

We haven’t even fully simulated one single cell yet.
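A minimal sketch of the determinism point (plain numpy, purely illustrative, not any particular framework): frozen weights map the same input to the same output forever, unless you deliberately inject noise.

```python
import numpy as np

# A frozen "network": reload these weights a thousand times and they
# will map the same input to the same output every single time.
weights = np.array([[0.2, -0.5],
                    [0.8,  0.1]])

def forward(x, noise_scale=0.0, rng=None):
    """Deterministic pass unless noise is deliberately injected."""
    out = weights @ x
    if noise_scale and rng is not None:
        out = out + rng.normal(scale=noise_scale, size=out.shape)
    return out

x = np.array([1.0, 2.0])
print(forward(x))                            # identical on every run
rng = np.random.default_rng()
print(forward(x, noise_scale=0.1, rng=rng))  # varies, but only because we add noise
```

The biological mess listed above (expression levels, hormones, stray RNA, microtubules taking the day off) is exactly the kind of state a reload can't wipe clean.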

[-] m532@hexbear.net 15 points 4 months ago

Computers know the 300th word because they store their stuff in arrays, which do not exist in brains. They could also store it in linked lists, like a brain does, but that's inefficient for the silicon memory layout.

Also, brains can know the 300th word. Just count. Guess what a computer does when it has to get the 300th element of a linked list: it counts to 300.
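Rough sketch of the difference, if it helps (Python, purely illustrative):

```python
# Array-style storage: jump straight to the 300th word, no counting needed.
words = ["word%d" % i for i in range(1, 1001)]
print(words[299])            # constant-time index arithmetic

# Linked-list-style storage: each word only "knows" the next one,
# so finding word 300 means walking the chain and counting.
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

head = None
for w in reversed(words):    # build the chain
    head = Node(w, head)

node = head
for _ in range(299):         # hop 299 times
    node = node.next
print(node.value)            # same answer, just reached by counting
```

Same information, very different access patterns.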

[-] Tomorrow_Farewell@hexbear.net 9 points 4 months ago

If I asked you what is 300th word of the poem, you cannot do it. Computer can

I'm sorry, but this is a silly argument. Somebody might very well be able to tell you what the 300th word of a poem is, while a computer that stored that poem as a .bmp file wouldn't be able to (without tools other than just basic stuff that allows it to show you .bmp images). In different contexts we remember different things about stuff.

[-] TraumaDumpling@hexbear.net 8 points 4 months ago* (last edited 4 months ago)

the point is that humans have subjective experiences in addition to, or in place of, whatever processes we could describe as information processing. since we aren't sure what is responsible for subjective experience in humans (we understand increasingly more of the physical correlates of conscious experience, but have no causal theories that can explain how the physical brain-states produce subjectivity), it would be presumptuous of us to assume we can simulate it in a digital computer. It may be possible with some future technology, field of science, or paradigm of thinking in mathematics or philosophy or something, but to assume we can just do it now with only trivial modifications or additions to our theories is like humans of the past trying to tackle disease using miasma theory: we simply don't understand the subject of study well enough to create accurate models of it. How exactly do you bridge the gap from objective physical phenomena to subjective experiential phenomena, even in theory? How much, or what kind, of information processing results in something like subjective experiential awareness? If 'consciousness is illusory', then what is the exact nature of the illusion, what is the illusion for (i.e. what is it concealing, and what is being kept ignorant by it?), and how can we explain it in terms of physics and information processing?

it is just as presumptuous to assume that digital computers CAN simulate human consciousness without losing anything important, as it is to assume that they cannot.

[-] Parsani@hexbear.net 7 points 4 months ago* (last edited 4 months ago)

Also, if bozo could please explain how trained oral historians and poets can recall thousands of stanzas of poetry verbatim with few or no errors I'd love to hear that, because it raises some questions about the dollar bill "experiment".

Through learned, embodied habit. They know it in their bones and muscles. It isn't the mechanical reproduction of a computer or machine.

Imo I don't think we could ever "upload a brain", and even if we did, it would be a horrific subjective experience. So much of our sense of self and of consciousness is learned and developed over time through being in the world as a body. Losing a limb has a significant impact on someone's consciousness (phantom limbs can hurt); imagine losing your entire body. This thought experiment still assumes that the brain alone is the entire seat of conscious experience, which is doubtful, as it just falls into a mind/body dualism under the idea that the brain is a CPU which could simply be plugged into something else.

Could there be an emergent conscious AI at some point? Perhaps, but as far as we can tell it may very well require a kind of childhood and slow development of embodied experience in a similar capacity to how any known lifeform becomes conscious. Not a human brain shoved into a vat.

[-] bumpusoot@hexbear.net 32 points 4 months ago* (last edited 4 months ago)

This essay is ridiculous, it's arguing against a concept that nobody with the minutest understanding or interest in the brain has. He's arguing that because you cannot go find the picture of a dollar bill in any single neuron, that means the brain is not storing the "representation" of a dollar bill.

I am the first to argue the brain is more than just a plain neural network, it's highly diversified and works in ways beyond our understanding yet, but this is just silly. The brain obviously stores the understanding of a dollar bill in the pattern and sets of neurons (like a neural network). The brain quite obviously has to store the representation of a dollar bill, and we will probably find a way to isolate this in a brain in the next 100 years. It's just that, like in a neural network, information is stored in complex multi-layered systems, rather than in traditional computing where a specific bit of memory is stored at a specific address.
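As a toy illustration of "stored in the pattern rather than at an address": a tiny Hopfield-style associative memory, where the stored pattern is smeared across the whole weight matrix and gets recalled from a corrupted cue. (A sketch of the general idea only; no claim that cortex actually works like this.)

```python
import numpy as np

# Store one pattern ("the dollar bill") with a Hebbian outer-product rule.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)            # no self-connections

# No single weight "contains" the pattern; it lives in the whole matrix W.
cue = pattern.copy()
cue[:3] *= -1                     # corrupt part of the cue (a rough sketch from memory)

state = cue.astype(float)
for _ in range(5):                # let the network settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the full pattern is recovered
```

There's no address you can read the dollar bill out of, but the representation is still unambiguously stored and retrievable.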

Author is half arguing a point absolutely nobody makes, and half arguing that "human brains are super duper special and can never be represented by machinery because magic". Which is a very tired philosophical argument. Human brains are amazing and continue to exceed our understanding, but they are just shifting information around in patterns, and that's a simple physical process.

[-] Frank@hexbear.net 8 points 3 months ago

This whole thing is incredibly frustrating. Like, this guy did draw a representation of a dollar bill. It was a shitty representation, but so is a 640x400 image of a Monet. What's the argument being made, even? It's just an empty gotcha. The way that image is stored and retrieved is radically different from how most actual physical computers work, but there is observably an analogous process happening. You point a camera at an object, take a picture, store it to disk, retrieve it, and you get an approximation of the object as perceived by the camera. You show someone the same object, they somehow store a representation of that object somewhere in their meat, and when you ask them to draw it they're retrieving that approximation and feeding it to their hands to draw the image. I don't get why the guy thinks these things are obviously, axiomatically incomparable.

[-] DefinitelyNotAPhone@hexbear.net 30 points 4 months ago

Meh, this is basically just someone being Big Mad about the popular choice of metaphor for neurology. Like, yes, the human brain doesn't have RAM or store bits in an array to represent numbers, but one could describe short term memory with that metaphor and be largely correct.
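(E.g., a toy version of that metaphor, with no pretence of being actual neuroscience: short-term memory as a small fixed-capacity buffer where new items push out old ones.)

```python
from collections import deque

# Toy "short-term memory": a bounded buffer, loosely the classic "7 +/- 2 items" story.
short_term = deque(maxlen=7)

for digit in "415552368901":     # try to hold a 12-digit number in mind
    short_term.append(digit)

print(list(short_term))          # only the last 7 digits survive
```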

Biological cognition is poorly understood primarily because the medium it is expressed on is incomprehensibly complex. Mapping out the neurons in a single cubic millimeter of a human brain takes literal petabytes of storage, and that's just a static snapshot. But ultimately it is something that occurs in the material world under the same rules as everything else, and does not have some metaphysical component that somehow makes it impossible to simulate using software in much the same way we'd model a star's life cycle or galaxy formations, just unimaginable using current technology.

[-] Formerlyfarman@hexbear.net 8 points 4 months ago* (last edited 4 months ago)

The OP isn't arguing it has a metaphysical component. It's arguing the structure of the brain is different from the structure of your PC. The metaphor bit is important because all thinking is metaphor, with different levels of rigor and abstraction. A faulty metaphor forces you to think the wrong way.

I do disagree with some things, though: what's a metaphor if not a model? What's reacting to stimuli if not processing information?

[-] SerLava@hexbear.net 7 points 4 months ago

Mapping out the neurons in a single cubic millimeter of a human brain takes literal petabytes of storage, and that's just a static snapshot

I read long ago that replicating all the functions of a human brain is probably possible with computers around an order of magnitude less powerful than the brain, because the brain is kind of inefficient.

[-] bumpusoot@hexbear.net 7 points 4 months ago* (last edited 4 months ago)

There's no way we can know that, currently. The brain does work in all sorts of ways we really don't understand. Much like the history of understanding DNA, what gets written off as "random inefficiency" is almost certainly a fundamental part of how it works.

[-] plinky@hexbear.net 6 points 4 months ago

I could describe it as a gold hunter with one of those sluice thingies, throwing the water out and keeping the gold; there, I described short-term memory.

shrug-outta-hecks

I don't disagree that it's a material process; I just think we find the most complex analogy we have at the time and take it (as the author mentions), but then start taking the metaphor too far.

[-] Frank@hexbear.net 10 points 4 months ago

Yeah, but we, if "we" is people who have a basic understanding of neuroscience, aren't taking it too far. The author is yelling at a straw man, or at lay people, which is equally pointless. Neuroscientists don't think of the mind, or the brain it runs on, as being a literal digital computer. They have their own completely incomprehensible jargon for discussing the brain and the mind, and if this article is taken at face value the author either doesn't know that or is talking to someone other than people who do actual cognitive research.

I'ma be honest, I think there might be some academic infighting here. Psychology is a field with little meaningful rigor and poor explanatory power, while neuroscience is on much firmer ground and has largely upended the theories arising from Epstein's heyday. I think he might be feeling the icy hand of mortality in his chest and is upset the world has moved past him and his ideas.

Also, the gold miner isn't a good metaphor. In that metaphor information only goes one way and is sifted out of chaos. There's no place in the metaphor for a process of encoding, retrieving, or modifying information. It does not resemble the action of the mind and cannot be used as a rough and ready metaphor for discussing the mind.

[-] dat_math@hexbear.net 19 points 4 months ago

Did this motherfucker really write more than 4000 words because nobody told them "all models are wrong but some are useful"?

[-] AssortedBiscuits@hexbear.net 16 points 4 months ago

A spectre is haunting Hexbear — the spectre of UlyssesT.

[-] Parzivus@hexbear.net 16 points 4 months ago

We really don't know enough about the brain to make any sweeping statements about it at all beyond "it's made of cells" or whatever.
Also, Dr. Epstein? Unfortunate.

[-] Frank@hexbear.net 7 points 4 months ago

We really do, though. Like, we really, really do. Not enough to build one from scratch, but my understanding is we're starting to be able to read images people are forming in their minds and to locate individual memories within the brain, and we're starting to get a grasp on how at least some of the subsystems of the mind function and handle sensory information. Like, we are making real progress at a rapid pace.

[-] bumpusoot@hexbear.net 10 points 4 months ago

We can't yet really read images people are thinking of, but we have got a very vague technology that can associate very specific brainwave patterns with specific images after extensive training with that specific image on the individual. Which is still an impressive 1% of the way there.

[-] AssortedBiscuits@hexbear.net 13 points 4 months ago

On a more serious note, techbros' understanding of the brain as a computer is just their wish to bridge subjectivity and objectivity. They want to be privy to your own subjectivity, perhaps even more privy to your own subjectivity than you yourself. This desire stems from their general contempt for humanity and life in general, which pushes them to excise the human out of subjectivity. In other words, if you say that the room is too hot and you want to turn on the AC, the techbro wants to be able to pull out a gizmo and say, "uh aktually, this gizmo read your brain and it says that your actual qualia of feeling hot is different from what you're feeling right now, so aktually you're not hot."

Too bad for the techbro you can never bridge subjectivity and objectivity. The closest is intersubjectivity, not sticking probes into people's brains.

[-] SSJ2Marx@hexbear.net 12 points 4 months ago* (last edited 4 months ago)

This was a really cool and insightful essay, thank you for sharing. I've always conceptualized the mind as a complex physical, chemical, and electrical pattern (edit: and a social context) - if I were to write a sci fi story about people trying to upload their brain to a computer I would really emphasize how they can copy the electrical part perfectly, but then the physical and chemical differences would basically kill "you" instantly creating a digital entity that is something else. That "something else" would be so alien to us that communication with it would be impossible, and we might not even recognize it as a form of life (although maybe it is?).

[-] SSJ2Marx@hexbear.net 12 points 4 months ago

Would you put your brain in a robot body?

[-] ashinadash@hexbear.net 10 points 4 months ago

My brain is a pentium overdrive without a fan and I am overheating

[-] TraumaDumpling@hexbear.net 9 points 4 months ago* (last edited 4 months ago)

here are some more relevant articles for consideration from a similar perspective, just so we know it's not literally just one guy from the 80s saying this. some cite this article as well but include other sources. the authors are probably not 'based' in a political sense; i'm not endorsing the people, just the arguments in some parts of the quoted segments.

https://medium.com/@nateshganesh/no-the-brain-is-not-a-computer-1c566d99318c

Let me explain in detail. Go back to the intuitive definition of an algorithm (remember this is equivalent to the more technical definition)— “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then time evolution of any system becomes computing it’s time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long you find suitable physical implementations) but are clearly specified (voltage low=Boolean logic zero generally in modern day electronics), as in the intuitive definition of an algorithm….with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to are in fact our created by us(or our intelligence to be more specific), in which we decide what are the input and output states. Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3,5,+ and = buttons as inputs, and the number that pops up on the LED screen as output is what allows you interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system evolution computational. This extends to any modern day artificial system we think of as a computer, irrespective of how sophisticated the I/O behavior is. The inputs and output states of an algorithm in computing are specified by us (and we often have agreed upon standards on what these states are eg: voltage lows/highs for Boolean logic lows/highs). If we miss this aspect of computing and then think of our brains as executing algorithms (that produce our intelligence) like computers do, we run into the following -

(1) a computer is anything which physically implements algorithms in order to solve computable functions.

(2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.

(3) the specific input and output states in the definition of an algorithm and the arbitrary relationship b/w the physical observables of the system and computational states are specified by us because of our intelligence,which is the result of…wait for it…the execution of an algorithm (in the brain).

Notice the circularity? The process of specifying the inputs and outputs needed in the definition of an algorithm, are themselves defined by an algorithm!! This process is of course a product of our intelligence/ability to learn — you can’t specify the evolution of a physical CMOS gate as a logical NAND if you have not learned what NAND is already, nor capable of learning it in the first place. And any attempt to describe it as an algorithm will always suffer from the circularity.

https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application

https://www.forbes.com/sites/alexknapp/2012/05/04/why-your-brain-isnt-a-computer/?sh=3739800f13e1

Adherents of the computational theory of mind often claim that the only alternative theories of mind would necessarily involve a supernatural or dualistic component. This is ironic, because fundamentally, this theory is dualistic. It implies that your mind is something fundamentally different from your brain - it's just software that can, in theory, run on any substrate.

By contrast, a truly non-dualistic theory of mind has to state what is clearly obvious: your mind and your brain are identical. Now, this doesn't necessarily mean that an artificial human brain is impossible - it's just that programming such a thing would be much more akin to embedded systems programming rather than computer programming. Moreover, it means that the hardware matters a lot - because the hardware would have to essentially mirror the hardware of the brain. This enormously complicates the task of trying to build an artificial brain, given that we don't even know how the 300 neuron roundworm brain works, much less the 300 billion neuron human brain.

But looking at the workings of the brain in more detail reveal some more fundamental flaws with computational theory. For one thing, the brain itself isn't structured like a Turing machine. It's a parallel processing network of neural nodes - but not just any network. It's a plastic neural network that can in some ways be actively changed through influences by will or environment. For example, so long as some crucial portions of the brain aren't injured, it's possible for the brain to compensate for injury by actively rewriting its own network. Or, as you might notice in your own life, its possible to improve your own cognition just by getting enough sleep and exercise.

You don't have to delve into the technical details too much to see this in your life. Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain are easily explained through computation - it might not even be possible to express these states mathematically.

What's more, the brain simply can't be divided into functional pieces. Neuronal "circuitry" is fuzzy and from a hardware perspective, its "leaky." Unlike the logic gates of a computer, the different working parts of the brain impact each other in ways that we're only just beginning to understand. And those circuits can also be adapted to new needs. As Mark Changizi points out in his excellent book Harnessed, humans don't have a portions of the brain devoted to speech, writing, or music. Rather, they're emergent - they're formed from parts of the brain that were adapted to simpler visual and hearing tasks.

If the parts of the brain we think of as being fundamentally human - not just intelligence, but self-awareness - are emergent properties of the brain, rather than functional ones, as seems likely, the computational theory of mind gets even weaker. Think of consciousness and will as something that emerges from the activity of billions of neural connections, similar to how a national economy emerges from billions of different business transactions. It's not a perfect analogy, but that should give you an idea of the complexity. In many ways, the structure of a national economy is much simpler than that of the brain, and despite that fact that it's a much more strictly mathematical proposition, it's incredibly difficult to model with any kind of precision.

The mind is best understood, not as software, but rather as an emergent property of the physical brain. So building an artificial intelligence with the same level of complexity as that of a human intelligence isn't a matter of just finding the right algorithms and putting it together. The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are.

[-] GalaxyBrain@hexbear.net 9 points 4 months ago

If our brains were computers we wouldn't have computers.

[-] queermunist@lemmy.ml 9 points 4 months ago

I'm glad he mentioned that we aren't just our brains, but also our bodies and our historical and material contexts.

A "mind upload" would basically require a copy of my entire brain, my body, and a detailed historical record of my life. Then some kind of witchcraft would be done to those things to combine them into the single phenomenal experience of me. Basically:

[-] Frank@hexbear.net 13 points 4 months ago

So, ironically, I think the author is falling into the trap they're complaining about. They're talking about an "upload" as somehow copying a file from one computer to another.

Instead, consider transferring your brain to a digital system a little at a time. Old cells die, new cells are created. Do you ever lose subjective continuity during that process? Let one meat cell die and a digital cell grow. Do you stop being yourself once all your brain cells are digital? Was there ever a loss of phenomenology?

[-] queermunist@lemmy.ml 8 points 4 months ago* (last edited 4 months ago)

You've completely misunderstood their criticism of mind uploading.

The author asserts that you are not really your brain. If you copied your brain into a computer, that hapless brain would immediately dissociate and lose all sense of self because it has become unanchored from your body and your sociocultural and historical-materialist context.

You are not just a record of memories. You are also your home, your friends and family, what you ate for breakfast, how much sleep you got, how much exercise you're getting on a regular basis, your general pain and comfort levels, all sorts of things that exist outside of your brain. Your brain is not you. Your brain is part of you, probably the most important part, but a computer upload of your brain would not be you.

[-] Philosoraptor@hexbear.net 12 points 4 months ago* (last edited 4 months ago)

You are not just a record of memories. You are also your home, your friends and family, what you ate for breakfast, how much sleep you got, how much exercise you're getting on a regular basis, your general pain and comfort levels, all sorts of things that exist outside of your brain. Your brain is not you.

Embodied cognition. I don't see this as implying that what we're doing isn't computation (or information processing) in some sense. It's just that the way we're doing it is deeply, deeply different from how even neural networks instantiated on digital computers do it (among other things, our information processing is smeared out across the environment). That doesn't make it not computation in the same way that not having a cover and a mass in grams makes a PDF copy of Moby Dick not a book. There are functional, abstract similarities between PDFs and physical books that make them the same "kinds of things" in certain senses, but very different kinds of things in other senses.

Whether they're going to count as relevantly similar depends on which bundles of features you think are important or worth tracking, which in turn depends on what kinds of predictions you want to make or what you want to do. The fight about whether brains are "really" computers or not obscures the deeply value-laden and perspectival nature of a judgement like that. The danger doesn't lie in adopting the metaphor, but rather in failing to recognize it as a metaphor--or, to put it another way, in uncritically accepting the tech-bro framing of only those features that our brains have in common with digital computers as being things worth tracking, with the rest being "incidental."

[-] queermunist@lemmy.ml 7 points 4 months ago* (last edited 4 months ago)

I think I agree.

One metaphor I quite like is the brain as a ball of clay. Whenever you do anything the clay is gaining deformities and imprints and picking up impurities from the environment. Embodied cognition, right? Obviously the brain isn't actually a ball of clay but I think the metaphor is useful, and I like it more than I like being compared to a computer. After all, when a calculator computes the answer to a math problem the physical structure of the calculator doesn't change. The brain, though, actually changes! The computation metaphor misses this.

This is really useful for understanding memory, because every time you remember something you pick up that ball of clay and it changes.
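A toy way to put that difference in code (purely illustrative, not a claim about real neuroscience): a store whose read operation also perturbs what's stored, unlike a dictionary lookup that leaves the value untouched.

```python
import random

class ClayMemory:
    """Toy store where every recall slightly rewrites the stored trace."""
    def __init__(self, value, drift=0.05):
        self.trace = value
        self.drift = drift

    def recall(self):
        # Reading is re-encoding: the act of recall nudges the trace.
        self.trace += random.gauss(0, self.drift)
        return self.trace

memory = ClayMemory(100.0)   # "remember the number 100"
for _ in range(50):
    memory.recall()          # every retelling reshapes the clay a little

print(memory.trace)          # close to 100, but no longer exactly 100
```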

[-] Philosoraptor@hexbear.net 7 points 4 months ago

After all, when a calculator computes the answer to a math problem the physical structure of the calculator doesn't change

What counts as "physical structure?" I can make an adding machine out of wood and steel balls that computes the answer to math problems by shuffling levers and balls around. A digital computer calculates the answer by changing voltages in a complicated set of circuits (and maybe flipping some little magnetic bits of stuff if it has a hard drive). Brains do it by (among other things) changing connections between neurons and the allocation of chemicals. Those are all physical changes. Are they relevantly similar physical changes? Again, that depends deeply on what you think is important enough to be worth tracking and what can be abstracted away, which is a value judgement. One of the Big Lies of tech bro narrative is that science is somehow value free. It isn't. The choice of model, the choice of what to model, and the choice of what predictive projects we think are worth pursuing are all deeply evaluative choices.

[-] Frank@hexbear.net 6 points 4 months ago

In dwarf fortress you can make a computer out of dwarfs, gates, and levers, and it won't change unless the dwarfs go insane from sobriety and start smashing stuff.

[-] Parsani@hexbear.net 8 points 4 months ago

A sidenote but you may like a book called Action in Perception. It's more of a survey of contemporary cognitive science in relation to perception, but still relevant to perceptual consciousness.

[-] AssortedBiscuits@hexbear.net 8 points 4 months ago

There would be double the comments if UlyssesT was still here.
