Zhu insists that these ideas are built on sand.
It's called silicon
Large language models, Zhu believes, will never achieve this.
I've been saying this ever since the LLM hype began. What I find particularly worrying is how many educated and supposedly intelligent people fell for the hype. Just because an algorithm gets good at mimicking human language patterns when fed enough data doesn't mean there is actual intelligence behind it. Machine learning is a very useful tool with a lot of applications, but LLMs are going to hit a hard wall at some point without real AGI.
Incredible things are happening at BigAI
Jokes aside, I was really intrigued by this longer article (though to me it seemed far from a long read, maybe less than 20 minutes? Time went by really quickly, and I even had to pause my Star Trek episode of the day 😸). One of the best articles I have read from The Guardian yet 💀💀
I do find Zhu's philosophy interesting, but his dogmatic stance against LLMs, and transformer models in general, might have become a serious case of shooting himself in the foot.
The fact that in 2023 (!) his former mentor Mumford convinced him to shift that stance and take them into (more or less) honest consideration – his position having morphed from a categorical refusal to even mention them to "They have their place, but I believe them to be far from a general solution" – seems to me like a game changer that might boost BigAI's potential tremendously in the future.
But it seems like the PRC is hedging its bets anyway, so I don't think that BigAI following only Zhu's formerly rather dogmatic vision would have had much of a negative effect on China's machine learning development in general.