submitted 10 months ago by L4s@lemmy.world to c/technology@lemmy.world

Top physicist says chatbots are just ‘glorified tape recorders’::Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.

[-] jaden@partizle.com 72 points 10 months ago

A physicist is not gonna know a lot more about language models than your average college grad.

[-] JoBo@feddit.uk 26 points 10 months ago

That's absolute nonsense. Physicists have to be excellent statisticians and, unlike data scientists, statisticians have to understand where the data is coming from, not just how to spit out simple summaries of enormously complex datasets as if it had any meaning without context.

And his views are exactly in line with pretty much every expert who doesn't have a financial stake in hyping the high tech magic 8-ball. On the Dangers of Stochastic Parrots.

[-] jaden@partizle.com 9 points 10 months ago

I had that paper in mind when I said that. Doesn't exhibit a very thorough understanding of how these models actually work.

A common argument is that the human brain very well may work the exact same, ergo the common phrase, "I'm a stochastic parrot and so are you."

[-] Jerkface@lemmy.world 6 points 10 months ago

Okay but LLMs have multiplied my productivity far more than any tape recorder ever could or ever will. The statement is absolute nonsense.

[-] JoBo@feddit.uk 6 points 10 months ago* (last edited 10 months ago)

Do you imagine that music did not exist before we had the means to record it? Or that it had no effect on the productivity of musicians?

Vinyl happened before tape but in the early days of computers, tape was what we used to save data and code. Kids TV programmes used to play computer tapes for you to record at home, distributing the code in an incredibly efficient way.

[-] Spzi@lemm.ee 2 points 10 months ago

Kids TV programmes used to play computer tapes for you to record at home, distributing the code in an incredibly efficient way.

Could you expand on this? Sounds interesting.

[-] JoBo@feddit.uk 3 points 10 months ago* (last edited 10 months ago)

They just played the tapes on TV, kinda screechy, computer-y sounds. They'd tell you when to press record on your cassette player before they started. You'd hold it close to the TV speakers until it finished playing, then plug the cassette player into your computer, and there'd be some simple free game to play. I didn't believe it would work but it did. I still don't believe it worked. But it did.
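For anyone curious how that could possibly work: the broadcast was just data encoded as audio tones, typically frequency-shift keying (FSK), where one tone stands for a 0 bit and another for a 1. Here's a rough Python sketch of the idea; the frequencies, baud rate, and framing follow the Kansas City Standard, but actual home computers varied, so treat the exact numbers as illustrative:

```python
import math

# Sketch of Kansas City Standard-style FSK: a 0 bit is a burst of 1200 Hz,
# a 1 bit is a burst of 2400 Hz. Real machines differed in the details.
RATE = 44100         # audio samples per second
BAUD = 300           # bits per second
F0, F1 = 1200, 2400  # tone frequencies for 0 and 1 bits

def byte_to_bits(b):
    # least-significant bit first, framed by a start bit (0) and two stop bits (1)
    return [0] + [(b >> i) & 1 for i in range(8)] + [1, 1]

def encode(data):
    samples = []
    n = RATE // BAUD  # samples per bit
    for b in data:
        for bit in byte_to_bits(b):
            freq = F1 if bit else F0
            samples += [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]
    return samples

audio = encode(b"LOAD")
print(len(audio))  # 4 bytes x 11 framed bits x 147 samples per bit
```

Played through a TV speaker and captured on a cassette, that waveform is the "screechy, computer-y sound"; the computer's tape loader just ran the decoding in reverse.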

There must be a clip somewhere on the internet but my search skills are nowhere near good enough to find one.

[-] JoBo@feddit.uk 3 points 10 months ago* (last edited 10 months ago)

[New comment instead of editing the old so that you see it]

I managed to find a video of an old skool game loading. That's what it sounded like when you loaded a program and it's exactly what they'd play on the TV so you could create your tape.

[-] DingoBilly@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

Your statement and the original one can both be in sync with one another.

Microsoft Word is just a glorified notepad but it still improves my productivity significantly.

And everyone will have different uses depending on their needs. Chatgpt has done nothing for my productivity/usually adds work as I have to double check all the nonsensical crap it gives me for example and then correct it.

[-] fidodo@lemm.ee 5 points 10 months ago

Those are all gross oversimplifications. By the same logic the internet is just a glorified telephone, the computer a glorified abacus, the telephone just a glorified messenger pigeon. There are lots of people who don't understand LLMs and exaggerate their capabilities, but dismissing them is also bad.

[-] LibertyLizard@slrpnk.net 5 points 10 months ago* (last edited 10 months ago)

I think describing word processors as glorified notepads would also be extremely misleading, to the extent that I would describe that statement as incorrect.

[-] demesisx@infosec.pub 52 points 10 months ago

Yes. Glorified tape recorders that can provide assistance and instruction in certain domains that is very useful beyond what a simple tape recorder could ever provide.

[-] TropicalDingdong@lemmy.world 26 points 10 months ago

Yes. Glorified tape recorders that can provide assistance and instruction in certain domains that is very useful beyond what a simple tape recorder could ever provide.

I think a good analogue is the invention of the typewriter or the digital calculator. It's not like it's something that hadn't been conceived of, or that we didn't have correlatives for. Is it revolutionary? Yes, the world will change (has changed) because of it. But the really big deal is that this puts up a big bright signpost of how things will go far off into the future. The typewriter led to the digital typewriter. The digital typewriter showed the demand for personal business machines like the first Apples.

It's not just about where we're at (and to be clear, I am firmly in the 'this changed the world' camp. I realize not everyone holds that view; but as a daily user/builder, it's my strong opinion that the world changed with the release of ChatGPT, even if you can't tell yet). The broader point is about where we're going.

The dismissiveness I've seen around this tech is, frankly, hilarious. I get that it's sporting to be a curmudgeon, but to dismiss this technology is to completely miss what will be one of the most influential human technologies ever invented. Is this general intelligence? To keep pretending it has to be AGI or nothing is to miss the entire damn point. And this goalpost shifting is how the frog gets slowly boiled.

[-] deranger@sh.itjust.works 6 points 10 months ago

I reckon it’s somewhere in between. I really don’t think it’s going to be the revolution they pitched, or some feared. It’s also not going to be completely dismissed.

I was very excited when I started to play with various AI tools, then about two weeks in I realized how limited they are and how much human input and editing they need to produce a good output. There's a ton of hype and it's had little impact on the regular person's life.

Biggest application of AI I’ve seen to date? Making presidents talk about weed, etc.

[-] TropicalDingdong@lemmy.world 2 points 10 months ago

I reckon it’s somewhere in between. I really don’t think it’s going to be the revolution they pitched, or some feared. It’s also not going to be completely dismissed.

Do you use it regularly or develop ML/ AI applications?

[-] ekky43@lemmy.dbzer0.com 6 points 10 months ago* (last edited 10 months ago)

Yes. I wrote my master's in engineering about ML/AI (before ChatGPT and YOLO became popular and viable), and am also currently working with multi-object detection and tracking using ML/AI.

It's not gonna be like the invention of the modern computer, but it's probably gonna reach about the same level as Google, or the electric typewriter.

[-] deranger@sh.itjust.works 3 points 10 months ago

I use some image generation tools and LLMs.

I think it's a safe bet to estimate it will work out to be somewhere in the middle of the two extremes. I don't think AI/ML is going to be worthless, but I also don't think it's going to do all these terrible things people are afraid of. It will find its applications and be another tool we have.

[-] whatisallthis@lemm.ee 2 points 10 months ago

Well it’s like a super tape recorder that can play back anything anyone has ever said on the internet.

[-] a_spooky_specter@lemmy.world 19 points 10 months ago

He's not even a top physicist, just well known.

[-] Goodman@discuss.tchncs.de 30 points 10 months ago

I wouldn't call this guy a top physicist... I mean, he can say what he wants, but you shouldn't be listening to him. I also love that he immediately starts shilling his quantum computing book right after his statements about AI. And mind you, this guy has some real garbage takes when it comes to quantum computers. Here is a fun review if you are interested: https://scottaaronson.blog/?p=7321

The bottom line is: you shouldn't trust this guy on anything he says, except maybe string theory, which is actually his specialty. I wish news outlets would stop putting this guy on; he is such a fucking grifter.

[-] hoodlem@hoodlem.me 9 points 10 months ago* (last edited 10 months ago)

I wouldn't call this guy a top physicist... I mean he can say what he wants but you shouldn't be listening to him.

Yeah, I don't see how he has any time to be a "top physicist" when he seems to spend all his time as a commenter on TV shows that are tangentially related to his field. On top of that, LLMs are not even tangentially related.

[-] MooseBoys@lemmy.world 22 points 10 months ago* (last edited 10 months ago)

Leading theoretical physicist Michio Kaku

I wouldn’t listen too closely to discount Neil deGrasse Tyson these days, especially in domains in which he has no qualifications whatsoever.

[-] A2PKXG@feddit.de 18 points 10 months ago

Just set your expectations right, and chatbots are great. They aren't intelligent. They're pretty dumb. But they can say stuff about a huge variety of domains.

[-] PixelProf@lemmy.ca 17 points 10 months ago* (last edited 10 months ago)

I understand that he's placing these relative to quantum computing, and that he is specifically a scientist who is deeply invested in that realm, but it just seems too reductionist from a software perspective. Ultimately, yes, we are limited by the architecture of our physical computing paradigm, but that doesn't discount the incredible advancements we've made in the space.

Maybe I'm being too hyperbolic over this small article, but does this basically mean any advancement in CS research is just a glorified (insert elementary mechanical thing here) because it uses bits and a von Neumann architecture?

I used to adore Kaku when I was young, but as I got into academics, saw how attached he was to string theory long after its expiry date, and saw how popular he got on pretty wild and speculative fiction, I struggle to take him too seriously in this realm.

In my experience, which comes from years in labs working on creative computation, AI, and NLP, these large language models are impressive and revolutionary, but quite frankly, for dumb reasons. The transformer was a great advancement, but seemingly only because we piled obscene, previously unimagined amounts of data on it. Now we can train smaller bots off the data from these bigger ones, which is neat, but it's still that mass of data.

To the general public: yes, LLMs are overblown. To someone who spent years researching creativity-assistance AI and NLP: these are freaking awesome, and I'm amazed at the capabilities we have now in creating code that can do qualitative analysis and natural language interfacing. But the model is unsustainable unless techniques like Orca come along and shrink down the data requirements. That said, I'm running pretty competent language and image models on a relatively cheap consumer video card with 12 GB, so we're progressing fast.

Edit to add: And I do agree that we're going to see wild stuff with quantum computing one day, but that can't discount the excellent research being done by folks working with existing hardware, and it's upsetting to hear a scientist balk at a field like that. And I recognize I led this by speaking down on string theory, but string theory pop science (including Dr. Kaku) wreaked havoc on people taking physics seriously.

[-] Goodman@discuss.tchncs.de 12 points 10 months ago* (last edited 10 months ago)

He is trying to sell his book on quantum computers, which is probably why he brought it up in the first place.

[-] PixelProf@lemmy.ca 7 points 10 months ago

Oh for sure. And it's a great realm to research, but pretty dirty to rip apart another field to bolster your own. Then again, string theorist...

[-] joe@lemmy.world 5 points 10 months ago

My opinion is that a good indication that LLMs are groundbreaking is that it takes considerable research to understand why they give the output they give. And that research could be for just one prediction of one word.
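To make the "one prediction of one word" concrete: at the output layer, everything the model computed reduces to a vector of scores (logits) over its vocabulary, which softmax turns into a probability distribution. A toy sketch of just that final step; the vocabulary and logit values here are invented for illustration:

```python
import math

# Toy final step of a single next-token prediction: logits in, probabilities
# out. The real research effort goes into explaining *why* the network
# produced these particular scores; this only shows what is done with them.
vocab  = ["cat", "dog", "tape", "recorder"]
logits = [2.0, 1.0, 0.5, 3.0]  # made-up scores, one per vocabulary entry

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
prediction = vocab[probs.index(max(probs))]
print(prediction)  # "recorder" has the highest logit, hence highest probability
```

Understanding why a real model assigned those logits, across tens of thousands of vocabulary entries and billions of parameters, is the part that takes the considerable research.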

[-] PixelProf@lemmy.ca 10 points 10 months ago

For me, it's the next major milestone in what's been a roughly decade-ish trend of research, and the groundbreaking part is how rapidly it accelerated. We saw a similar boom in 2012-2018, and now it's just accelerating.

Before 2011/2012, if your network was too deep, had too many layers, it would just break down and give pretty random results - it couldn't learn - so networks had to perform relatively simple tasks. Then a few techniques were developed that enabled deep learning, the ability to really stretch the number of patterns a network could learn if given enough data. Suddenly, things that were jokes in computer science became reality. Image recognition, for example, took about a year to halve its error rate, and about 5 years to go from roughly 35-40% incorrect classification to 5%. That's the same stuff that powered all the hype around AI beating Go champions and professional StarCraft players.

The Transformer (the T in GPT) came out in 2017, around the peak of the deep learning boom. Two years later, GPT-2 was released, and while it's funny to look back on now, it practically revolutionized temporal data coherence and showed that throwing lots of data at this architecture didn't break it like previous ones had. Then they kept throwing more and more data at it, and it kept improving. With GPT-3 about a year later, like in 2012, we saw an immediate spike in previously impossible challenges being destroyed, and seemingly the models haven't degraded with more data yet. While it's unsustainable, it's the same kind of puzzle piece that pushed deep learning into the forefront in 2012, and the same concepts are being applied to different domains like image generation, which has also seen massive boosts thanks in part to the 2017 research.

Anyway, small rant, but yeah - its hype lies in its historical context, for me. The chatbot is an incredible demonstration of the underlying advancements in data processing made over the past decade, and if working out patterns from massive quantities of data is a pointless endeavour, I have sad news for all folks with brains.

[-] ClemaX@lemm.ee 14 points 10 months ago

Well, one could argue that our brain is a glorified tape recorder.

[-] LapGoat@pawb.social 6 points 10 months ago

behold! a tape recorder.

holds up a plucked chicken

[-] Feathercrown@lemmy.world 8 points 10 months ago

He's a physicist. That doesn't make him wise, especially in topics that he doesn't study. This shouldn't even be an article.

[-] eestileib@sh.itjust.works 5 points 10 months ago

Kaku is a quack.

[-] Bishma@discuss.tchncs.de 5 points 10 months ago

I call them glorified spreadsheets, but I see the correlation to recorders. LLMs, like most "AIs" before them, are just new ways to do line-of-best-fit analysis.
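For anyone who hasn't seen it, the "line of best fit" being referenced is ordinary least squares: pick the slope and intercept that minimize squared error against the data. A minimal sketch with made-up data points:

```python
# Ordinary least-squares "line of best fit" - the one-dimensional ancestor
# of what a neural network does with billions of parameters. Data is made up.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# closed-form OLS solution for slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # slope ~1.94, intercept ~0.15
```

An LLM is of course fitting a wildly more complicated curve through a wildly higher-dimensional space, which is where the "glorified" part does a lot of work in either direction.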

[-] feedum_sneedson@lemmy.world 7 points 10 months ago

That's fine. Glorify those spreadsheets. It's a pretty major thing to have cracked.

[-] Bishma@discuss.tchncs.de 2 points 10 months ago

It is. The tokenization and intent processing are the things that impress me most. I've been joking since the '90s that the most impressive technological innovation shown on Star Trek: TNG was computers that understand the intent of instructions. Now we have that... mostly.

[-] sirico@feddit.uk 3 points 10 months ago

A theoretical physicist, and a questionable one at that.

[-] FlyingSquid@lemmy.world 3 points 10 months ago

More people need to learn about Racter. This is nothing new.

this post was submitted on 15 Aug 2023
153 points (78.9% liked)
