Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source. Everything must be verified.
When we attribute a statement, a position, or a quote to a named source, that material comes directly from interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.
How do you enforce "Everything must be verified" specifically? I'm pretty sure similar rules must have been in place when they published a hallucinated interview.
tl;dr: don't quote or link to Arse Technica because it could be made up entirely