Well, not exactly, but completely misunderstood.
Everyone who actually knows about AI is familiar with the alignment and takeoff problems.
(Play this if you need a quick summary: https://www.decisionproblem.com/paperclips/index2.html)
So whenever someone says, "we are making AI," the response should be "oh fuck no" (using bullets and fire if required).
New tagging and auto-completion tools are fine (there is probably a whole space of new tools that can come out of the AI research field that don't risk human extinction).