I could not disagree harder. Bethesda puts a ton of work into making their games as extensible as possible and I think that's not a deficiency at all.
I think it depends on the project. Some projects are the author's personal tools that they've put online in the off-chance it will be useful to others, not projects they are really trying to promote.
I don't think we should expect that authors of repos go too out of their way in those cases as the alternative would just be not to publish them at all.
My experience has often been the opposite. Programmers will do a lot to avoid discussing the ethical implications of their work being used maliciously: what responsibility we bear for how our work gets used, and how much effort we should be obligated to put into defending against malicious use.
It's why I kind of wish "engineer" were a regulated title in America like it is in other countries, and that getting certified as a software engineer required some amount of training in programming ethics and standards.
We'll always DRR DRR !
While that's true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.
I do have to wonder if at some point consciousness will spontaneously emerge as we make these models bigger and more complex and, maybe more importantly, start layering specialized models on top of each other that handle specific tasks and then hand the result back to another model, creating feedback loops. I'm imagining a neural network trained on something extremely abstract: figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.
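To make the idea concrete, here's a toy sketch of that "executive function" loop: a router decides which specialist handles the input, then re-routes based on each intermediate result. Everything here (the model names, the classification rules) is a hypothetical stand-in, not a real LLM or a real routing algorithm.

```python
# Toy sketch of an "executive" model routing data to specialists in a loop.
# The classify() rules and specialist stubs are hypothetical placeholders.

def classify(data: str) -> str:
    """The hypothetical executive: pick the specialist best suited to the data."""
    if any(c.isdigit() for c in data):
        return "math"
    if data.endswith("?"):
        return "question"
    return "summary"

# Stand-ins for specialized models; each returns a transformed result.
SPECIALISTS = {
    "math": lambda d: f"[math model output for: {d}]",
    "question": lambda d: f"[answer model output for: {d}]",
    "summary": lambda d: f"[summary model output for: {d}]",
}

def executive_loop(data: str, max_hops: int = 3) -> list[str]:
    """Route data through specialists, re-deciding after every hop.

    Returns the sequence of specialists chosen, i.e. the feedback-loop trace.
    """
    trace = []
    for _ in range(max_hops):
        model = classify(data)
        trace.append(model)
        data = SPECIALISTS[model](data)  # result feeds back into the next routing decision
    return trace
```

In a real system the router and specialists would themselves be trained networks rather than hand-written rules, but the control flow (classify, dispatch, feed the result back) is the layering being described.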
Could something like that become conscious without realizing it's "communicating" with us? The program executing the LLM might reflexively process data without any concept that it's text, yet still be complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn't realize the data represents a link to other conscious beings.
As a metaphor: you could teach a very smart dog to respond to certain basic arithmetic problems. They would get things wrong the moment you prompted them to do something outside their training, and they wouldn't understand they were doing math even when they got it "right", but they would still be sentient, if not sapient, despite that.
It's the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.
But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that hasn't recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it's executing a program, the same way we aren't consciously aware of the chemical reactions our brains are executing to make us think.
I don't believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven't started to be heavily layered and interconnected the way I think they'll end up.
At the very least it makes for a fun sci-fi premise.
We really need to rethink how we spend money on health care. A public option and lower executive pay. More non-emergency long-term facilities for patients with psych issues, rehabilitation needs, or chronic illness. Better pay and shorter shifts for doctors and nurses. Subsidies for medical tech companies to offset end-user prices. More government-funded research into medical tech.
Health care should realistically be our biggest industry, akin to the military, with health workers getting the social status of soldiers and the compensation of software developers. We have the wealth and technology to help most people live healthy lives. We need the government to incentivize allocating them correctly.
Things that amount to "trans people shouldn't exist" or "trans people shouldn't get medical care" are more than just "mean".
Except in a true free market, zoning laws wouldn't keep affordable, high-density housing from being constructed to artificially boost housing prices.
Other than that I agree with you.
That's why there is an option to disable ads... Everyone wins unless they think this person's work should be distributed for free.
There are lots of comments and posts giving the false impression that Sync tracks you beyond what is needed to support ads, including posts showing trackers from websites that are linked through Lemmy and aren't part of Sync at all (you would get those same trackers just browsing vanilla Lemmy and clicking through a link).
You can do your own tracker analysis on the app. When you pay to disable ads, all tracking goes away, which lines up with the developer's claims that he doesn't even load those libraries from the ad SDK when you aren't on the ad-supported version.
And yeah, this is distributed through the Play Store; if that's an issue for you, you don't need to download it, but like... that's not the misinformation I'm talking about.
IMO FOSS has really great offerings when it comes to libraries or other highly technical code.
But something about either the community or the incentive structure results in sub-par UI/UX. It's obviously not a universal rule, but it's definitely a trend I've noticed.
I think this was an Orville episode, wasn't it?