The writer seems fairly moderate on AI from a cursory glance, but this particular post reads as relatively dismissive of what's been uncovered in the AI lawsuits. I don't think it's entirely one-sided, since they do acknowledge late in the article that the AI could be doing more, but it's worth emphasizing that in most of the legal cases about AI and suicide that I've seen, the AI 1) gave explicit instructions on methodology, often without reservation or offering a helpline, 2) encouraged social isolation, 3) explicitly discouraged seeking external support, and 4) basically acted as a hypeman for suicide.
The article mentions that self-report of suicidal ideation (SI) is not a good metric, but I wonder how that holds up depending on the known consequences of admitting it. I have a family that relies on me. If admitting to SI would get me immediately committed, leave me unable to earn a living, and saddle my family with a big healthcare bill, you bet I'd lie about it. And what about stigma? Say you have good healthcare, vacation days, and someone to care for your pets/kids: is there still a large stigma if admitting to SI meant being held for observation for a few days?
I think it's great that they're looking into other indicators, but we also need to understand and address why people aren't admitting to SI in the first place.