When your innovation is mass manipulation, I don't think it should be allowed. Maybe that's just me.
Futurology
Meanwhile in the EU, AI companies are screeching about regulations that restrict AI and its risks, because they hinder innovation. Pick a lane, dude.
"let me spread some fear and interest in my product so I can keep getting people to invest in my company or else we're out of cash"
"Humanity needs to see how awesome and powerful and simultaneously extremely dangerous AI is" says guy who's business is based on building AI models and developing "safety" tools for them.
For a second, I was optimistic and thought they were going to highlight the economic, environmental, and misinformation concerns with the ensloppification engine. Then, I read the tagline...
"It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," Amodei warns after the passage about authoritarian governments. "AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. ... [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny."
Humanity needs to wake up to dangers of AI 👍
Says Anthropic chief 🥱
Lol seriously the play by play. I was so disappointed yet unsurprised.
uh huh, sure buddy
Humanity is about to be handed almost unimaginable power,
Btw, the minimum safe distance to stand from a nuclear detonation is six feet. Also, it's safe to microwave aluminum, and to eat razor blades in your pizza.
Hrm, somehow what this guy is saying does not match my experiences with AI... oh well, I'm sure it is I who must be wrong. After all, he says:
AI is doing 90% of the computer programming to build Anthropic's products, including its own AI
so how could that possibly go wrong?
AI is doing 90% of the computer programming to build Anthropic's products, including its own AI
This explains a lot tbh.
Meanwhile AI tells me there is only one letter "c" in "Ice cream"
Claude is AMAZING. The future is now! It's actually very hard to trip it up with complex trick questions like "Spell 'blueberry'" nowadays. Check out what it said when I asked it just now what tomorrow's date is (I even gave this doctorate-degree-level intellect a clue that it's in 2026).

I asked it to show its reasoning so I could help it to understand my complex query, and got this very insightful reasoning process.

Reasoning...
Wearegiven:"Whatdateof2026isittomorrow?"Note:Thecurrentyearis2023,sotheuserisaskingaboutafuturedaterelativetoanunknown"today"in2026.However,withoutknowingthecurrentdate(in2026)wecannotcompute"tomorrow".Thus,wemustclarifythattheanswerdependsonthecurrentdate.Butnote:Theusermightbeaskingthisquestiononaspecificday,sowecanassumethatthe"today"inthequestionisthedaywhenthequestionisbeingasked?However,thequestionisabout2026,whichisinthefuture.Sincewedon'thaveaspecificreferencedatein2026,wecannotgiveaspecificanswer.Therefore,thethoughtprocessshouldleadtoaresponsethatexplainsthedependencyandperhapsprovidesawaytocomputeitifthecurrentdatewereknown.Let'sbreakdown:1.Thequestion:"Whatdateof2026isittomorrow?"-Thisimpliesthattomorrowisadayintheyear2026.2.However,thecurrentdate(the"today"fromwhichwearecountingtomorrow)isnotprovided.Withoutknowingtoday'sdatein2026,wecannotdeterminetomorrow'sdate.3.Possibilities:a)Iftheuserisaskingthisquestionin2026,thenwecouldusethecurrentdate.Butthecurrentyearis2023,sowearenotin2026.b)Theusermightbeaskinghypothetically,butwithoutareference,wecannotcompute.4.Therefore,theanswermustbeconditionalonknowingthecurrentdate.5.Wecanprovideageneralmethod:-Tomorrow'sdatein2026isonedayafterthecurrentdate(whichmustbein2026).-Forexample,iftodayisJanuary1,2026,thentomorrowisJanuary2,2026.6.Butnote:Thequestionsays"dateof2026",soweareonlyconcernedwithdatesintheyear2026.IftodayisDecember31,2026,thentomorrowwouldbeJanuary1,2027,whichisnotin2026.Sothequestionimpliesthattomorrowmustalsobein2026.7.Therefore,theanswerisonlydefinedfordatesin2026fromJanuary1toDecember30(becauseonDecember31,tomorrowwouldbein2027).SowemustalsonotethatifthecurrentdateisDecember31,2026,thentomorrowisnotin2026.8.Approachfortheresponse:-Explainthatwithoutknowingthecurrentdatein2026,wecannotgiveaspecificanswer.-Providethegeneralrule:tomorrow'sdateisthenextdayinthecalendarafterthecurrentdate.-NotethatthecurrentdatemustbebetweenJanuary1,2026andDecember30,2026fortomorrowtoalsobein2026.-Iftheuserhasaspe
cificcurrentdateinmind,theycanprovideitandwecancompute.9.However,notethattheusermightbeaskingthisonaparticularday(say,thedaytheyareaskingthequestion)butin2026?Thatdoesn'tmakesensebecausewearein2023.Sowemustclarifythatwearein2023and2026isinthefuture.10.Alternativeinterpretation:Theusermightbeaskingforthedateoftomorrowintheyear2026,meaningiftodayisthesamedayoftheyearin2026asitisnowin2023?Butthatisnotastandardinterpretationandthequestiondoesnotspecify.11.Conclusion:Thequestionisambiguouswithoutareferencepoint.Wemustaskforthecurrentdatein2026orexplainthatweneedthatinformation.Butnote:Theusermightbeaskingthisquestiononaspecificday(say,June15,2023)andwanttoknowwhatdateitwouldbetomorrowin2026?Thatis,thesamedayoftheyear?However,thequestiondoesnotspecifythat.Let'sre-read:"Whatdateof2026isittomorrow?"Thisisambiguous.Itcouldmean:OptionA:"Iftodayissomedayin2026,whatdateistomorrow(in2026)?"->requiresknowingtodayin2026.OptionB:"Whatistomorrow'sdate(in2023)andthenwhatwouldthatsamedaybein2026?"->butthatwouldbethesamedayoftheyear,buttheyearchanges.However,thequestionsays"dateof2026",soOptionBwouldnotbeadatein2026iftomorrowin2023isin2023.Forexample
I pressed 'Stop' because I'm pretty sure the poor guy was in a logic loop, and I didn't want to heat up the oceans any further with my diabolically complex line of inquiry.
you can't expect AI to know answers to such deep questions D:
claude says
There are 2 c’s in “ice cream” — one in “ice” and one in “cream.”
Bungo 3.1 says I should use a clothes iron to get the wrinkles out of my testicles.
Well, it's not wrong that it will work... the use of the word "should" is debatable...
I mean... Yeah. Anyone who knows even the first thing about how an LLM works is going to tell you it's not qualified to answer that. That niche functionality would need to be tacked onto the LLM as, say, a deterministic algorithm it could call and report the results from.
In a world of plenty of valid arguments against widespread generative AI, you chose one that at best says "people need to be more educated about generative AI before using it" and at worst says "I need to be more educated about generative AI before campaigning against it".
This is the 'next level natural language technology'?
Always some AI white knights in the comments, bravely telling people "you're using AI wrong, you're ignorant, you're uneducated about generative AI," as though this isn't literally the first thing that the market leader OpenAI currently suggests you use it for.

