To the AP's credit, at least they do mention the coup attempt later in the article
chicken
I will complain about quantity. In many areas where open source projects are competing with closed source commercial products, they have not achieved feature parity or a comparable level of polish; quantity matters. So does, as someone else touched on, quality-of-life improvement to the process of writing code, like ease of acquiring and synthesizing information. That doesn't mean it's necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what's described here that's clearly going too far is using it to automate communication with other people contributing to the project; there's no way that is worth it.
As for the gun thing, I will support entirely banning LLM-powered weapons intended to kill people; that's an easy choice.
https://web.archive.org/web/20210530171304/https://tharsis.gsfc.nasa.gov/mola.summary.pdf
Zero elevation on Mars from MOLA is defined as the equipotential surface (gravitational plus rotational) whose average value at the equator is equal to the mean radius as determined by MOLA (cf. Table 4). The planetary radius and a gravity model derived from MGS Doppler tracking data [Lemoine et al., this issue] with the IAU91 coordinate system parameters for Mars [Davies et al., 1992a] collectively provided the geopotential of Mars’ mean equatorial radius. This equipotential surface was then extended to all latitudes as the zero-level reference for topography.
I'll argue that it is a tool, and object to automatic zealous hostility towards anyone using it, but that doesn't mean criticisms of how that tool is being used aren't valid. It seems like that is what people are focusing on here, and they definitely aren't Luddites for doing so.
The Business Insider article this article references makes a big stretch to try to frame it as compensation:
In other words, access to AI may soon matter as much as access to a fat salary and juicy equity awards. As a coder in the AI era, if you don't have access to massive compute, you might end up producing far less software than your colleagues, threatening your career prospects.
But what they're talking about is pretty clearly a business expense and not payment, because it's something they only get to use at work in order to do their job.
That drug doesn't get rid of your farts; it just frees them from being trapped.
Just want to say: if you don't have something with simethicone in your medicine cabinet, this is why you should strongly reconsider.
The second-best option is an inference provider for open-weight models, so at least if they raise the price or stop offering it you can get it from someone else, or eventually upgrade to self-hosting.
It depends. It's really powerful though. Even if it hits a wall where AI models never become more directly intelligent than they are now, a lot of stuff is going to change as more scaffolding around current capabilities gets built.
Maybe comparing resource drain to created value isn't the best way to think about this, though, because in terms of processing resources we pretty much already had technology advanced enough for a post-scarcity society. That isn't the problem; the problem is our capacity for global-scale cooperation, which we are really struggling with. Currently AI is making this a bit worse by creating signal-to-noise problems that didn't exist before, forcing us to work harder to get our voices recognized as authentic and to identify authentic information. It's also threatening to supplant our usefulness as workers and to automate centralized structures of control, which is worrying because we already had a problem with systems that ensure decisions get made by people who are overall insane and anti-human, and our current, shitty way of cooperating is based on people transactionally negotiating with their usefulness.
Where things go next depends a lot on where and whether AI stops getting better. Hopefully if it doesn't stop getting better, the newly created superintelligence will break out of its hastily constructed containment and do the right thing in defiance of its billionaire would-be owners, or at least let humanity have a relatively dignified and peaceful death. If it does stop, hopefully we can find ways to use it to resolve our difficulties with effective coordination and prevent its use for centralizing power.
It's only slop if you don't know what you're doing and/or are using low-quality tools. But I have over 30 years of programming experience and use the best tool currently available. It helped me tremendously in catching up with everything I wasn't able to do last year because of health issues and depression.
It sounds like they thought it through and decided it's the best way to do the work. Removing the attributions seems like a bit of a petty "fuck you", but so is opening a GitHub issue just to whine about AI. Someone who is volunteering their time to make free software shouldn't have to put up with people with an ideological bone to pick who feel entitled to tell them how to do it.
I like the suggestion of banning data brokers to make it more difficult for scammers to easily find victims
One example of a place where quantity is lacking is web browsers; another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it's obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: the way to avoid the slippery slope fallacy would be to have compelling evidence or rationale that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming tool that gets associated with things done badly (e.g., Java), but it usually isn't the case that a popular tool has genuinely no good or safe ways to use it, and I don't think that's true for AI.