this post was submitted on 20 Nov 2025
602 points (99.7% liked)

News

[–] dejected_warp_core@lemmy.world 17 points 3 months ago (4 children)

…he’s reading off the bottom of his screen.

Aw fuck.

I'm gonna have to ask absolutely bullshit questions in interviews now, aren't I? Do you have any other strategies for how to spot this? I really don't want to drag in remote exam-taking software to invade the applicant's system in order to be assured no other tools are in play.

[–] korazail@lemmy.myserv.one 15 points 3 months ago (2 children)

I'm not in a hiring position, but my take would be to throw an unrelated tool into a question. E.g. "How would you use PowerShell in this HTML to improve browser performance?" A human would go "what the fuck?" An LLM will confidently make shit up.

I'd probably immediately follow that with a comment to lower the interviewee's blood pressure, like, 'You wouldn't believe how many people try to answer that question with an LLM.' A solid hire might actually come up with something, but you should be able to tell from their delivery whether they're just reading LLM output or are genuinely inspired by the question.

[–] Jankatarch@lemmy.world 4 points 3 months ago (1 children)

Be careful, though, because if you ask that with enough confidence, I would think I'm the one in the wrong.

"PowerShell had OOP for a few years without me knowing, so maybe it has hidden HTML usage too."

[–] korazail@lemmy.myserv.one 6 points 3 months ago

That was my body-language cue. An 'umm... 😅' answer is a pass, as is any attempt to actually integrate the disparate tools that doesn't sound like it's being read. The creased eyebrows, hesitation, wtf face, etc. are proof that the interviewee has domain knowledge and knows the question is wrong.

I do think the tools need to be tailored to the position. My example may not have been the best. I'm not a professional front end developer, but that was my theoretical job for the interviewee.

[–] dejected_warp_core@lemmy.world 4 points 3 months ago

It's a fine line to walk, but I see what you're getting at here. I wouldn't want to come across as incompetent either, lest it reflect on the company. Your follow-up remark is brilliant. Delivery is everything, I suppose.

[–] phx@lemmy.world 6 points 3 months ago (2 children)

I wonder if AI seeding would work for this.

Like: come up with an error condition or a specific scenario that doesn't/can't work in real life. Post to a bunch of boards asking about the error, then answer back from an alt account with a fake fix. You could even make the answer something obviously off, like:

  • ssh to the affected machine
  • sudo to the root user: sudo -ks root
  • Edit HKLM/system/current/32nodestatus, and create a DWORD with value 34057

Make sure to thank yourself with "hey, that worked!" from the original account.

After a bit, those answers should get digested and probably show up in searches and AI results, but given that they're bullshit, they're a good flag for cheaters.

[–] calcopiritus@lemmy.world 2 points 3 months ago

Don't have the source on me now, but I read an article that showed it was surprisingly easy. Something like 0.01% of content containing the magic words was enough to trigger it.

[–] dejected_warp_core@lemmy.world 1 points 3 months ago

There's stuff out there now about how to poison content scrapers that are training AI, so this is absolutely doable on some scale. There are already what I like to call "golden tokens" that produce freakishly reliable and stable results every time, so I think it's likely there are counterparts that reliably trigger bad output too. They're just not documented yet.

In a sane world, commercial AI would have legally required watermarks and other quirks that give content away as artificial, every time. The em-dash is probably the closest we have to this right now for text, likewise the occasional impossible backdrop or extra fingers on images. You can't stop a lone ranger with a home-rolled or Chinese model, but it would be a start.

[–] damnedfurry@lemmy.world 5 points 3 months ago* (last edited 3 months ago)

I've never used AI for interview stuff, beyond a little tool that gave me sample questions and assessed my recorded verbal responses, as prep before an interview. But reading that, I remembered that Nvidia has a feature where a visual effect makes your eyes look like you're looking straight into the camera all the time (unless they're totally closed, of course). I imagined this type of person using that as further subterfuge during the interview, to conceal the looking down.

Luckily, the average person leaning completely on AI for an interview is not nearly savvy enough for this sort of thing, in my experience.

[–] UnderpantsWeevil@lemmy.world 4 points 3 months ago (3 children)

I literally include "Can you name four basic SQL commands?" any time I interview someone, and it's a great litmus test.

[–] dejected_warp_core@lemmy.world 2 points 3 months ago

I appreciate the use of a good old-fashioned shibboleth like this. Thanks.

[–] ThirdConsul@lemmy.ml 2 points 3 months ago* (last edited 3 months ago) (1 children)

I'm a software engineer with 15+ years of experience and this question had me stumped.

Select insert update delete?

Create alter drop rollback?

Or did you mean types of commands? But of those there are 5?

Or is that question supposed to get a garbage response?

God it's late.

[–] UnderpantsWeevil@lemmy.world 3 points 3 months ago* (last edited 3 months ago)

Select insert update delete?

Nailed it

Although I'm willing to accept "CREATE, DROP, TRUNCATE" in the mix, for no other reason than it shows they know the basics.
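
For anyone rusty, the four CRUD statements the question is fishing for fit in a few lines. A minimal sketch using Python's built-in sqlite3 (the `users` table and its columns are made up for illustration):

```python
import sqlite3

# In-memory database so the example is self-contained
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE (DDL) sets up the table the four CRUD commands operate on
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# INSERT: add a row
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# SELECT: read it back
print(cur.execute("SELECT name FROM users").fetchall())  # [('alice',)]

# UPDATE: change the row
cur.execute("UPDATE users SET name = ? WHERE name = ?", ("bob", "alice"))

# DELETE: remove it
cur.execute("DELETE FROM users WHERE name = ?", ("bob",))

conn.close()
```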

[–] partial_accumen@lemmy.world 1 points 3 months ago (2 children)

I'm not following, wouldn't an LLM be able to easily answer that one?

[–] IronBird@lemmy.world 3 points 3 months ago (1 children)

Knowing absolutely nothing about this topic, I would assume an actually competent person would be able to answer immediately and confidently. Someone reading an LLM's output probably sounds like they're reading from a script, even if the answers aren't wrong.

[–] partial_accumen@lemmy.world 3 points 3 months ago

I would assume an actually competent person would be able to answer immediately and confidently

People aren't always able to regurgitate encyclopedic knowledge in interviews. Sure, some can, but many have anxiety about interviews in general, or stuff going on in their lives, which can make them not the sharpest when hit with a random question like this. There are some absolutely brilliant people I've hired who would fail miserably if this was how they were measured.

Some people work better with scenario-based questions than with bulleted, memorized answers. Honestly, I'd much rather have a candidate who knows the concept being discussed even if they can't remember the exact name of a term, or the name of a flag they'd need to include when issuing a command. Those last things can be googled in the moment. Conceptual knowledge and understanding are much more important to me than rote memorization.

Someone reading an LLM's output probably sounds like they're reading from a script, even if the answers aren't wrong

Well, that's what I experienced in my original post, but I'm not sure it will always be that way. Someone more clever could take the answer from the LLM, paraphrase it, put it in their own words, and sound competent.

[–] UnderpantsWeevil@lemmy.world 2 points 3 months ago (1 children)

Not in an in-person interview.

[–] FosterMolasses@leminal.space 1 points 3 months ago

I was confused as to why no one suggested this yet lol