667 points (88.0% liked)
submitted 05 Feb 2024 (10 months ago) by yesman@lemmy.world to c/memes@lemmy.ml

I think AI is neat.

[-] Adalast@lemmy.world 11 points 10 months ago

Ok, but so do most humans? So few people actually have true understanding of topics. They parrot the parroting they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary "correct" way to answer that question, and it is still an incorrect parroting of something they have been told.

Ask yourself: what do you actually understand? How many topics could you be asked "why?" on repeatedly and actually be able to answer more than 4 or 5 times? I know I have a few. I also know which ones I am not able to do that with.

[-] Daft_ish@lemmy.world 17 points 10 months ago* (last edited 10 months ago)

I don't think actual parroting is the problem. The problem is they don't understand a word outside of how it is organized. They can't be told to do simple logic because they don't have a basic understanding of each word in their vocabulary. They can only reorganize things to varying degrees.

[-] DragonTypeWyvern@literature.cafe 10 points 10 months ago

https://en.m.wikipedia.org/wiki/Chinese_room

I think they're wrong, as it happens, but that's the argument.

[-] Daft_ish@lemmy.world 2 points 10 months ago

I guess I'm just looking at it from an end-user vantage point. I'm not saying the model can't understand the words it's using. I just don't think it currently understands that specific words refer to real-life objects, and that there are laws of physics that apply to those objects and govern how they interact with each other.

Like, saying that a guy exists and is a historical figure means that information can be independently verified by physical objects that exist in the world.

[-] Adalast@lemmy.world 6 points 10 months ago

In some ways, you are correct. It is coming, though. The psychological/neurological word you are searching for is "conceptualization". The AI models lack the ability to abstract the text they know into the abstract ideas of the objects, at least in the same way humans do. Technically, the ability to say "show me a chair" and get back images of a chair, then follow up with "show me things related to the last thing you showed me" and get couches, butts, tables, etc., is a conceptual abstraction of a sort. The issue comes when you ask "why are those things related to the first thing?" That ability is coming, but it will be a little while before a model can describe the abstraction it just performed, even though it is capable of the first stage.
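
To make the "related things" idea concrete, here is a toy sketch of how retrieval by conceptual similarity works under the hood. The embedding numbers are made up for illustration; in a real system they would come from a trained model:

```python
import numpy as np

# Toy embedding vectors; real systems get these from a trained model.
embeddings = {
    "chair":  np.array([0.9, 0.1, 0.3]),
    "couch":  np.array([0.8, 0.2, 0.35]),
    "table":  np.array([0.7, 0.3, 0.4]),
    "banana": np.array([0.1, 0.9, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def related_to(word, k=2):
    # Rank every other item by similarity to the query vector.
    query = embeddings[word]
    scores = {w: cosine(query, v) for w, v in embeddings.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(related_to("chair"))  # ['couch', 'table'] with these toy vectors
```

Note that nothing in this ranking can answer "why" the items are related; the similarity scores carry no explanation, which is exactly the gap described above.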

[-] KeenFlame@feddit.nu 1 points 10 months ago

Some systems clearly do that though, or are you just talking about LLMs?

[-] Daft_ish@lemmy.world 2 points 10 months ago
[-] KeenFlame@feddit.nu 1 points 10 months ago

It's like saying, bro, this mouse can't even type text if I don't use an on-screen keyboard.

[-] fidodo@lemmy.world -1 points 10 months ago

It doesn't need to understand the words to perform logic, because the logic was already performed by humans who encoded their knowledge into words. It's not reasoning, but the reasoning was already done by humans. It's not perfect, of course, since it's still based on probability, but the fact that it can pull up the correct sequence of words to exhibit logic is incredibly powerful. The main hard part of working with LLMs is that they break randomly, so harnessing their power will be a matter of programming in multiple levels of safeguards.
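
As a rough illustration of what "multiple levels of safeguards" could look like, here's a minimal sketch. `call_llm` is a hypothetical stand-in for whatever model API you use, and the validation rules are just examples:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    return '{"answer": 42, "confidence": 0.9}'

def validated_answer(prompt: str, retries: int = 3) -> dict:
    """Layered safeguards: parse check, schema check, sanity check, retry."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)          # Layer 1: output must be valid JSON
        except json.JSONDecodeError:
            continue
        if not {"answer", "confidence"} <= data.keys():
            continue                        # Layer 2: required fields present
        if not 0.0 <= data["confidence"] <= 1.0:
            continue                        # Layer 3: values in a sane range
        return data
    raise RuntimeError("model output failed validation on every attempt")

print(validated_answer("What is 6 * 7?"))
```

The point of the layering is that any single check can be fooled, but each one catches a different class of random breakage before it reaches the rest of your program.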

[-] Blackmist@feddit.uk 11 points 10 months ago

I feel that knowing what you don't know is the key here.

An LLM doesn't know what it doesn't know, and that's where what it spouts can be dangerous.

Of course there are a lot of actual people that applies to as well. And sadly, they're often in positions of power.

[-] KeenFlame@feddit.nu -1 points 10 months ago

There are more than a couple of research agents in development.

We need something that can fact-check in real time without error. That would fuck Twitter up lol

[-] bruhduh@lemmy.world 7 points 10 months ago

Few people truly understand what understanding means at all. I had a teacher in college who seriously thought that you should not understand the content of the lessons but simply remember it to the letter.

[-] Adalast@lemmy.world 3 points 10 months ago

I am so glad I had one who was the opposite. I discussed practical applications of the subject material with him after class, and at the end of the semester he gave me a B+ even though I had only earned a C by score, because I actually grasped the material better than anyone else in the class, even if I was not able to evaluate it as well on the tests.

[-] bruhduh@lemmy.world 1 points 10 months ago

I'm glad for you) Our teacher liked to offer discussion only to shoot us down when we tried to understand something. I was like, duh, that's what teachers are for, to help us understand. If teachers don't do that, then it's the same as watching YouTube lectures.

[-] Ramblingman@lemmy.world 4 points 10 months ago

This is only one type of intelligence, and LLMs are already better than humans at regurgitating facts. But I think people really underestimate how smart the average human is. We are incredible problem solvers, and AI can't even match us in something as simple as driving a car.

[-] Adalast@lemmy.world 4 points 10 months ago

Lol @ driving a car being simple. That is one of the more complex sensorimotor tasks that humans do. You have to track the speed of all vehicles in front of you, assess collision probabilities, monitor for non-vehicle obstructions (like people, animals, etc.), adjust the accelerator to maintain your velocity as the terrain changes, stay alert to any functional changes in your vehicle and be ready to adapt to them, and maintain a running inventory of the laws that apply to you at any given time and be sure to follow them. Hell, that is not even an exhaustive list for a sunny day under the best conditions. Driving is fucking complicated. We have all just formed strong, deeply connected pathways in our somatosensory and motor cortices to automate most of the tasks. You might say it is a very well-trained neural network with hundreds to thousands of hours spent refining and perfecting the responses.

The issue that AI has right now is that we are only running 1 to 3 sub-AIs to optimize and calculate results. Once that number goes up, they will be capable of a lot more. For instance: one AI for finding similarities, one for categorizing them, one for mapping them into a use-case hierarchy to determine when certain use cases apply, one to analyze structure, one to apply human kineodynamics to the structure, and a final one to analyze the effectiveness of the kineodynamic use cases when performed by a human. This would be a structure that could be presented with an object, be told that humans use it, and piece together possible uses for the tool, describing them back to the presenter with instructions on how to do so.
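
As a loose sketch of that kind of staged architecture, with each specialist model reduced to a placeholder function (everything here is hypothetical, stage names included):

```python
# Hypothetical pipeline of specialized models; in the setup described
# above, every stage would be its own trained model rather than a stub.

def find_similarities(obj: str) -> list[str]:
    # Stage 1: compare the object against known things.
    return [f"{obj} resembles known graspable objects"]

def categorize(features: list[str]) -> str:
    # Stage 2: map similarity features onto a category.
    return "hand tool" if any("graspable" in f for f in features) else "unknown"

def propose_uses(category: str) -> list[str]:
    # Stage 3: look up plausible human use cases for the category.
    uses = {"hand tool": ["strike", "pry", "scrape"]}
    return uses.get(category, [])

def describe_tool(obj: str) -> str:
    # Chain the specialists, each feeding the next.
    features = find_similarities(obj)
    category = categorize(features)
    actions = propose_uses(category)
    return f"{obj}: category={category}, plausible human uses={actions}"

print(describe_tool("hammer"))
```

The interesting part isn't any one stage; it's the composition, where each narrow model hands its output to the next and the chain as a whole produces something none of them could alone.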

[-] exocrinous@lemm.ee 2 points 10 months ago

AI can beat me in driving a car, and I have a degree.

[-] Harbinger01173430@lemmy.world 1 points 10 months ago

Joke's on them. I don't even calculate when I need to parrot. I am beyond such lowly needs.
