this post was submitted on 22 Feb 2026
798 points (97.6% liked)

Microblog Memes

10937 readers
2277 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago
[–] nucleative@lemmy.world 30 points 16 hours ago (5 children)

People around me use AI all the time to get answers on general topics. More and more, they use it like a search engine / information-augmentation system.

They are not technical people. They mostly know that the information needs to be double-checked and might be wrong, but they usually take it at face value when the stakes are low.

Honestly, this is about what they did before: they would search Google, click on the first blog, take it as the word of God, and claim to have "done their research".

I too use AI regularly for brainstorming, quickly summarizing massive text messages people send me, etc.

I don't love it or hate it. In some cases it saves a lot of time and is a useful tool. In other cases it outputs trash that we can't use for anything serious.

Just like a hammer or a shovel, it's a tool. It can be used the right way, and it can be used the wrong way.

[–] Feyd@programming.dev 5 points 9 hours ago* (last edited 9 hours ago)

Any usage that isn't massively more efficient than the non-LLM way is unethical due to resource consumption. I.e., if a regular search engine would do the trick, using an LLM just because you can is unethical.

[–] wonderingwanderer@sopuli.xyz 2 points 9 hours ago

It can be helpful for quickly summarizing a vast body of knowledge or a highly complex topic, to get a general overview and see which strings to pull further. That works as long as you don't take everything at face value and understand that you still need to pull those strings yourself in order to acquire real understanding.

Like, if I suddenly wanted to learn computer programming, I wouldn't know where to start. But querying an LLM can give me a general idea, define a few key terms and explain the difference between related concepts, without me having to browse through a hundred different tech blogs to answer all my questions in terms I can understand.

But I wouldn't suddenly think I'm a computer programmer after doing that. I would have a better idea of where to start learning. I would be able to decide whether to focus first on object-oriented programming or functional programming, static or dynamic typing, declarative or imperative syntax, etc., instead of getting overwhelmed from the start just trying to learn the differences between those concepts.
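To make one of those distinctions concrete, here's a toy sketch (my own example, not from the thread) of the imperative vs. declarative styles mentioned above, written in Python:

```python
# Imperative style: spell out each step and mutate state along the way.
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# Declarative style: describe the result; the language handles iteration.
def squares_declarative(numbers):
    return [n * n for n in numbers]

print(squares_imperative([1, 2, 3]))   # [1, 4, 9]
print(squares_declarative([1, 2, 3]))  # [1, 4, 9]
```

Both produce the same output; the difference is whether you describe *how* to build the result step by step or *what* the result should be.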

It can also suggest resources for further learning, books or websites written by humans, links to open-source software that does what I'm trying to do, etc.

I wouldn't expect it to write code for me, but it can be an efficient aid to self-learning and show me what programs and libraries to use for my intended purpose.

Or for astrophysics, for example. I wouldn't expect it to give me an accurate breakdown of the engineering specs required to build a pair of O'Neill cylinders at a Lagrange point, but it can suggest software for rendering prototypes or for simulating the forces that need to be accounted for.

That wouldn't make me an astrophysicist, but it's kind of cool that you don't need to be one to learn about this stuff and tinker around in a field that's so vast and technical as to be otherwise prohibitive for non-experts.

It also depends on the LLM, of course. I think Mistral and Lumo are generally pretty okay at doing what I described above. Their algorithms aren't corrupted by American venture capital, at least, so they have more incentive to give you an accurate response rather than being sycophantic and hugboxing.

[–] JasonDJ@lemmy.zip 3 points 10 hours ago* (last edited 10 hours ago)

I asked ChatGPT to review my resume and make changes tailored for the job description I was applying to (which I also gave it). Also told it that this was an internal position and not really an upgrade, but a sidestep that (I felt at the time) was more aligned with my long-term career goals.

I was really happy with the improvements it suggested.

Didn't get the job, but as I understand it, the hiring manager wanted to bring in a friend of his from the moment he posted it.

In retrospect, I'm kinda glad that's how it panned out. The new guy and I are operationally equal, and he's incredibly competent. We complement each other well and get along great.

And he's friends with the big boss and thus has his ear.

The main reason I even applied for the job was that I wouldn't want to work with anyone but myself in it. And he's close enough.

I did get an interview for it, which ultimately just became a 1:1 with the boss, and it gave us a chance to talk openly about where I see deficiencies that need fixing. All in all it went great. Six or so months later, I'm feeling a renaissance in the air at work. The things I talked about with him are now front-and-center and getting the attention they needed.

This company moves quite slowly, so six months (basically, new fiscal) seems incredibly quick.

[–] deadbeef79000@lemmy.nz 12 points 15 hours ago* (last edited 8 hours ago) (2 children)

I think of an LLM as extraordinarily lossy compression. All the training data is essentially encoded in the model. You can get an approximation of the data back out again with the right input.
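The lossy-compression analogy can be sketched with a toy example (purely illustrative, and not how an LLM actually works internally): coarse quantization throws detail away, so "decompression" only ever recovers an approximation of the original data.

```python
# Hypothetical toy "extraordinarily lossy compression":
# round values onto a coarse grid, then reconstruct from the grid.

def compress(values, step=10):
    """Throw away detail by snapping each value to a coarse grid."""
    return [round(v / step) for v in values]

def decompress(codes, step=10):
    """Recover only an approximation of the original data."""
    return [c * step for c in codes]

original = [3, 17, 42, 58]
approx = decompress(compress(original))
print(approx)  # [0, 20, 40, 60] -- roughly right, but the detail is gone
```

The reconstruction is plausible but never exact, which is the point of the analogy: the right input gets you something close to the training data, not the training data itself.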

I don't think it's any less reliable than random blogs on the web, and I don't have to wade through SEO tripe either.

[–] merc@sh.itjust.works 1 points 3 hours ago

That's what makes them shitty though.

When I have a hard technical problem I often search for and read through a dozen different sources. Many of them are wrong, or are right but not covering exactly the situation I'm looking at. Eventually I'll find one that's either right and answers my problem, or gives me the clue I need so I can figure out the solution for myself.

If I ask an LLM to solve the problem, it will make up an answer that would seamlessly blend in with all its training data. In other words, it's most likely to produce something that's wrong, or something that's right but not for my particular case, or something that's close but incomplete. That's effectively useless. At worst it blends in with its training data enough to convince me it's right, while not actually being right. At best it's something that is close enough to give me the clue I need. Most of the time it's going to be something that's wrong and I know it's wrong because if it were that simple I wouldn't have had to resort to the AI bullshit generator.

[–] mushroomman_toad@lemmy.dbzer0.com 15 points 14 hours ago (1 children)

The annoying thing, though, is that all the random blogs on the web are written using these LLMs now. That makes it much harder to be critical of your sources, because they're all coming from an unnamed, proprietary LLM with no information about who owns it or what the training data was. At least before, I could look up the user or check out their other articles; now every article is randomly generated from some unknown prompt.

[–] ClamDrinker@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

I would argue this isn't only a bad thing, though. Even before AI, many bogus articles and claims circulated, e.g. that people swallow spiders in their sleep, which many outlets parroted.

I would guess most people never checked (m)any sources on most information they found so long as the 'vibe' felt trustworthy. There is no cure to make reality simple, and the more pressure we have to teach people to think critically, the better.

[–] petrol_sniff_king@lemmy.blahaj.zone 2 points 4 hours ago (1 children)
  1. AI is much better at creating internet spam.
  2. AI is a vector for even reputable places to "set and forget" any article they're in charge of. Any mistruths are simply 'glitches'.
  3. The pressure on people to think critically only matters if people actually start thinking critically. Kids use this technology to skip their homework.
[–] ClamDrinker@lemmy.world 2 points 4 hours ago (1 children)

No disagreement here. I'm simply saying that because you are more likely to be misled now than ever, being lazy about it isn't an option anymore, and teachers can use that fact to drive the point home harder. In the past, if you were lazy about checking sources and verifying information, chances were much higher that you still got valid information that didn't harm your life down the road. Now you might just hurt yourself by putting glue on your pizza. Not that I want that, but the consequences of intellectual laziness have never been bigger, so the emphasis on understanding must grow to match; the alternative is being taken advantage of.

#3 is very important, as this is the core thing a school should teach. But let's not kid ourselves: kids have been cheating their way out of homework since the start of time 😄

[–] petrol_sniff_king@lemmy.blahaj.zone 2 points 2 hours ago (1 children)

But let's not kid ourselves: kids have been cheating their way out of homework since the start of time 😄

I don't mean to come off as too aggressive, because I don't think we're really arguing with each other. But I tend to see statements like this as a kind of handwaving apologia for something that, to be clear, real people are doing to us on purpose. It's the same way people might lament the coming of hurricane season: nothing really to be done about it.

[–] ClamDrinker@lemmy.world 1 points 2 hours ago

It can certainly be used for that, I'll admit, but no, that isn't my intention. I hear many good stories on that front about teachers who have developed a really good nose for AI and use it as a learning moment for their students. The world is filled with ways to cheat, and teachers are well aware of that. In the end, the process of unlearning cheating with AI is the same as for conventional cheating, is all I'm saying.

[–] aln@lemmy.world -1 points 12 hours ago (1 children)

I'm sorry, but all the use cases you listed show that you're just lazy. Stop it. It's embarrassing.

[–] nucleative@lemmy.world 7 points 12 hours ago (2 children)

I'm lazy as fuck. I want to solve problems in the easiest way humanly possible. With the least amount of effort output.

What about you? Do you take the hard way?

[–] howrar@lemmy.ca 4 points 10 hours ago

Do you not cross reference multiple archived news articles and seek out past attendees to remind yourself of what Britney Spears wore at her last concert? smh

[–] aln@lemmy.world -2 points 11 hours ago (2 children)

I'll be real with you: I typed "lazy" but wanted to type "idiot". Read your fucking emails, Jesus Christ. You still have to check everything generative AI writes, because it lies constantly. By its very nature it does not understand what it's generating.

[–] nucleative@lemmy.world 4 points 10 hours ago (1 children)

Hard to tell if you're trolling or trying to add value to the conversation and just missing it.

A hammer doesn't know what it is building but it is still useful.

This is the nature of tools: for some they improve output, for some they don't.

Everyone's a god damn tool philosopher.

Personally, I'm fine with banning cigarettes regardless of how responsibly my dead grandpa may have used them.

[–] howrar@lemmy.ca 3 points 10 hours ago (1 children)

Obviously, don't rely on them to read important emails for you. But so many things don't need additional checking. We've all done at least a decade of schooling. We all know basic math, science, and history. When we forget things, all it takes is a small reminder to get it back. Our brains are capable of recognizing whether we've seen something before or not. We're also capable of reasoning to determine whether something we read is consistent with everything else we know.

So many other things are also so unimportant that it doesn't matter at all if you're wrong. For example, some actor looks familiar, it lies to you about what film they were in, and you believe it. Is your life any worse off for it?

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 4 hours ago (1 children)

it lies to you about what film they were in, and you believe it. Is your life any worse off for it?

I think a better question is: why, then, am I asking it questions?

If I had a friend I knew was a notorious liar, I would—big chess move—simply stop asking him who actors are. Unless it was really funny.

[–] howrar@lemmy.ca 0 points 3 hours ago (1 children)

If it's a liar that lies every time or most of the time, then yeah, don't bother.

why [...] am I asking it questions?

I can't actually think of any specific scenario where something is unimportant enough to not matter but important enough that you'd ask. What I was originally thinking of were actually scenarios where I planned to verify the information at a later time, but I mistook that in my head as not verifying it.

Yeah, fair enough.

The only time it's happened to me is when Gemini violates my eyes with its presence.