Probably not, sounds terrifying.
chicken
A rule of thumb I think is good for most sorts of investment: which choice can you feel good about having made, whether or not it works out? I can handle not getting $1k, but I would feel like a real chump missing out on an easy $1M without giving my best effort. If I pick just the mystery box and win, I feel like that win is deserved. If I pick just the mystery box and walk away with nothing, then at least I don't have to live with the shame of being a 2-boxer, which is worth more to me than $1k. If I pick both boxes, I most likely get a little bit of money and a lifetime of bitter regrets, or in the less likely case get $1.001M and a sense of having barely avoided disaster without really "deserving" it. Choosing only the mystery box is the clear choice because it is the choice I am better able to handle having made, on an emotional level.
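For what it's worth, the expected-value arithmetic points the same way once the predictor is even slightly better than a coin flip. A quick sketch (the accuracy parameter p is my own framing, not part of the comment above):

    # Expected payoff of one-boxing vs two-boxing, assuming the
    # predictor guesses your choice correctly with probability p.
    def expected_values(p: float) -> tuple[float, float]:
        one_box = p * 1_000_000                    # mystery box is full iff the prediction was right
        two_box = p * 1_000 + (1 - p) * 1_001_000  # mystery box is full iff the prediction was wrong
        return one_box, two_box

    for p in (0.5, 0.51, 0.9, 0.99):
        one, two = expected_values(p)
        print(f"p={p}: one-box ${one:,.0f}, two-box ${two:,.0f}")

One-boxing pulls ahead as soon as p clears 0.5005 or so, which is a nice case of the "no regrets" rule and the arithmetic agreeing.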
If you are in the US, and the risk you're concerned about is getting in trouble, yes, it is enough, provided you use it correctly. The only real risk is that copyright trolls will scrape your IP from the swarm while you are torrenting, along with everyone else's, and automatically send complaints to your ISP, which may then send you a threatening email, or shut off your internet if it happens enough times. The fact that this is the only action they take against consumer-level pirates means that if your home IP is never itself visible to torrent peers, you are effectively immune from anything happening.
Just make sure to bind your torrent client to your VPN's network interface; this is the accepted way of ensuring your IP cannot leak if your VPN loses its connection.
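In practice this is a client setting rather than something you script (qBittorrent, for example, exposes it under Advanced > Network interface), but here is roughly what it amounts to underneath. A minimal sketch at the libtorrent level, assuming python-libtorrent 1.2+ and a VPN interface named tun0 with address 10.8.0.2, both placeholders for whatever your VPN actually assigns:

    import libtorrent as lt

    # Accept and originate peer connections only via the VPN.
    ses = lt.session({
        "listen_interfaces": "10.8.0.2:6881",  # incoming peers can only reach the VPN address
        "outgoing_interfaces": "tun0",         # outgoing peer traffic is pinned to the VPN interface
    })
    # If the VPN drops, tun0 disappears and connections simply fail,
    # instead of silently falling back to your real interface.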
AFAIK it is anonymous (to other users if not to the devs; I also haven't played the sequel), though not entirely public, as there's some opaque mechanism determining what you do or don't see, and content isn't visible to people who don't have the game. Have you thought about strategies for Sybil resistance? This is a big thing I think it gets right: there is a built-in filter, and simultaneously little incentive to maliciously bypass it.
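To make the Sybil point concrete: the general shape of the defense is to gate participation on something costly to duplicate, and here each identity costs a full copy of the game. A toy sketch of that idea only (not the game's actual mechanism, which as noted is opaque; the names are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str  # account id
        body: str

    def visible_posts(posts: list[Post], has_verified_copy) -> list[Post]:
        # has_verified_copy: account id -> bool, standing in for whatever
        # license check the platform performs. Since every fake identity
        # costs a full purchase, flooding the feed with sock puppets
        # doesn't scale.
        return [p for p in posts if has_verified_copy(p.author)]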
Both, incidentally, are categories where I will never be happy with slopcode.
The point here isn't necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be, especially when security and correct operation are very important), just that quantity clearly matters, contrary to the point you were making earlier that it doesn't.
We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey, that's OK; we aren't allowed to call things out before they happen, and judgement may only be passed once the damage is done, right?
Out of curiosity: we know that LLM usage increases cognitive deficits and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?
I think it's a mistake to treat all LLM usage as one thing, to denounce that thing as a sin wholesale rather than in part, and to consider it no further beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards, for example, electricity, which really is very dangerous when misused and caused lots of fires and electrocutions; but those problems eventually got mitigated by working out more sensible ways to use it, not by returning to an off-grid world.
I don't think they are even going to let them use these credits at home, honestly; the whole idea is just that being able to claim a previous job gave you $X in AI credits is valuable on a resume and so counts as compensation. They aren't even talking about AI companies themselves doing this; it's speculation about other companies spending $100k a year per worker on AI and why that would be worth it. Kind of what you would expect from an article that is mostly about things people said on LinkedIn, I guess.
One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it's obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: the way to avoid the slippery slope fallacy would be to have compelling evidence, or a rationale, that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming technology that has come to be associated with things done badly (e.g. Java), but it is rarely the case that a popular tool has genuinely no good or safe ways to use it, and I don't think that's true of AI.
To the AP's credit, at least they do mention the coup attempt later in the article.
I will complain about quantity: in many areas where open source projects compete with closed source commercial products, they have not achieved feature parity or a comparable level of polish. Quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn't mean the tradeoff is necessarily worthwhile; how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what's described here that clearly goes too far is using it to automate communication with the other people contributing to the project; there's no way that is worth it.
As for the gun thing, I will support entirely banning LLM-powered weapons intended to kill people; that's an easy choice.
I looked at the transcript of the YouTube video to see the context of that quote, and it's honestly even worse than they're making it out to be: