this post was submitted on 23 Feb 2026
58 points (84.5% liked)

Technology

top 24 comments
[–] ICastFist@programming.dev 20 points 1 day ago (1 children)

Alternative title: "We found a random Substack and decided to correlate it with market fluctuations because it seemed like a very good idea to drive clicks!"

[–] phoenixz@lemmy.ca 2 points 1 day ago

Ding ding ding

Yay for clickbait bullshit articles

[–] degenerate_neutron_matter@fedia.io 41 points 2 days ago (1 children)

It's a sure sign of a healthy non-bubble economy when a random Substack post can cause a stock market crash.

[–] Iconoclast@feddit.uk 8 points 1 day ago

Is the stock market crash in the room with us?

[–] rimu@piefed.social 25 points 2 days ago (2 children)

Really? Looks like a normal day at the office to me:

image

[–] XLE@piefed.social 5 points 1 day ago (1 children)

Do you not simply believe the assumption that AI will be super powerful any day now, titled ~~"AI 2027"~~ "The 2028 Global Intelligence Crisis"?

It's gonna happen for real this time. ~~Cryptocurrency~~ ~~NFTs~~ AI and cryptocurrency will upend the market with how incredible they have been.

Then, agentic commerce, coupled with stablecoins, gets rid of transaction fees and upends the business models of payment processors like Mastercard and card-focused banks like American Express.

/s

[–] IratePirate@feddit.org 1 points 1 day ago

It’s gonna happen for real this time. Cryptocurrency NFTs AI and cryptocurrency will upend the market with how incredible they have been.

You forgot the Metaverse. (Like you should.)

[–] meco03211@lemmy.world 13 points 2 days ago* (last edited 1 day ago) (1 children)

Wait a minute. Is the DOW not "OVER 50,000 DOL... 50,000. IT'S OVER 50,000!" ??

[–] criss_cross@lemmy.world 2 points 1 day ago

Aw shit does this mean crime isn’t legal anymore?

[–] ButtermilkBiscuit@feddit.nl 16 points 2 days ago

2 to 4 years out guys, I'm serious this time. Gaaaaaawwwww.

[–] ToTheGraveMyLove@sh.itjust.works 8 points 2 days ago* (last edited 2 days ago) (1 children)

Good, fuck the stock market. Let it all crash and get people hungry; maybe then we'll finally eat the rich. If that doesn't do it, I don't know what will.

[–] KeenFlame@feddit.nu -5 points 1 day ago

You are lazy

[–] panda_abyss@lemmy.ca 5 points 2 days ago

I’m pretty sure the 1% drop is attributable to Trump pushing more tariffs. 

[–] morto@piefed.social 4 points 2 days ago

From AI 2027 to this, it seems that even hyped AI predictions are getting humbled.

[–] wobblyunionist@piefed.social 3 points 2 days ago (1 children)

Silly, AI doesn't even work; devs are being rehired to fix its mistakes. That blows the whole theory up before it starts.

[–] Iconoclast@feddit.uk -1 points 1 day ago (2 children)

AI is a broad category of systems, not any one thing. "AI doesn't work" is like saying "plants taste bad".

[–] XLE@piefed.social 1 points 1 day ago (1 children)

You know exactly what we're talking about when we look at this article and say "AI doesn't work." If you want to feign outrage, save it for the tech companies that muddy the waters.

[–] Iconoclast@feddit.uk 1 points 1 day ago* (last edited 1 day ago) (1 children)

Even if someone's inaccurately using "AI" as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.

One spitting out false information isn't a sign they're not working. That's not what LLMs are designed for. They're chatbots - not generally intelligent systems. They don't think - they talk.

[–] XLE@piefed.social 1 points 1 day ago (1 children)

If you can understand that the sentence "AI doesn't work" is about LLMs, surely you can also understand that "not working" is synonymous with returning incorrect outputs.

I have literally no idea what else you'd be arguing. Its ability to generate words? Everybody knows it can do that.

[–] Iconoclast@feddit.uk 0 points 1 day ago (1 children)

The vast majority of people aren't educated on the correct terminology here. They don't know the difference between AI, LLM, AGI, ASI, etc. That makes it near impossible to have real discussions about AI - everyone's constantly talking past each other and using the same words to mean completely different things.

My original comment wasn't even challenging their claim that "AI doesn't work." I was just pointing out that AI and LLM aren't synonymous. It's my one-man fight against sloppy, imprecise use of language. I'd rather engage with what people are actually saying, not with what I assume they're saying.

When it comes to LLMs, it's not just a "word generator." It's a system that generates natural-sounding language based on statistical probabilities and patterns. In other words: it talks. That's all. Saying an LLM "doesn't work" because it spits out inaccurate info is like saying a chess bot doesn't work because it can't play poker. No - that's user error. They're trying to use the tool for something it was never designed to do.
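To make the "statistical probabilities and patterns" part concrete, here's a toy sketch of next-token sampling with a hypothetical hand-made probability table (a real LLM learns billions of weights instead, but the sampling idea is the same):

```python
import random

# Hypothetical probability table: given the last two words,
# the chance of each possible next word.
probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def next_token(context, table):
    """Sample the next word from the distribution for the last two words."""
    dist = table[tuple(context[-2:])]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

tokens = ["the", "cat"]
tokens.append(next_token(tokens, probs))
# Every possible continuation reads as fluent language; nothing in the
# sampling step checks whether the result is factually true.
```

The point of the sketch: the machinery only ever asks "what word is likely here?", never "is this claim accurate?".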

[–] XLE@piefed.social 1 points 1 day ago (1 children)

To belabor the chess analogy: I would say a chessbot didn't work if it randomly caused pieces to appear. Or if it made exceedingly lousy moves. You'd apparently say it was working because it technically changed the board.

Literally nobody is saying the token predictor isn't predicting tokens. It's just predicting the wrong tokens, which normal people call "not working," while tech evangelists prefer to call it "hallucination" or "misalignment," depending on the narrative they're aiming for.

[–] Iconoclast@feddit.uk 1 points 1 day ago (1 children)

The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it's saying, it's working - even if the content of what it says is factually inaccurate.

[–] XLE@piefed.social 1 points 1 day ago

Accuracy is the only thing people want, and the only thing AI companies talk about. The text was already legible, and it's been that way for years. I think you're alone on your quest to lower the bar for the word "works".

When they are pushing any plant, for everything, everywhere, disregarding concerns for quality or appropriateness, then yeah, plants taste bad, and will probably kill a few people.