[-] Tyler_Zoro@ttrpg.network 6 points 11 months ago

That's not what Popper is talking about. He's talking about maintaining the option to be intolerant of the act of intolerance, not of people.

[-] Tyler_Zoro@ttrpg.network 2 points 1 year ago

AI bots never had rights to waive. Their work is not their work.

This is only partially true. In the US (which tends to set the tone on copyright, though other jurisdictions will weigh in over time) generative AI cannot be considered an "author." That doesn't mean other forms of rights don't apply to AI-generated works (for example, they may be treated as trade secrets and will probably be accepted for trademark purposes).

Also, all of the usual transformations that can take a work from the public domain and produce a new copyrightable derivative still apply.

This is a much more complex issue than just, "AI bots never had rights to waive."

[-] Tyler_Zoro@ttrpg.network 5 points 1 year ago

Artists, construction workers, administrative clerks, police, and video game developers all develop their neural networks in the same way, a process that ANNs simulate.

This is not, "foreign to most artists," it's just that most artists have no idea what the mechanism of learning is.

The method by which you provide input to the network for training isn't the same thing as learning.
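To make "the mechanism of learning" concrete: in an ANN, learning is repeated exposure to examples plus error-driven adjustment of connection weights. The sketch below is purely illustrative; the data, tiny one-layer "network," and constants are made up for this comment and aren't from any real training setup.

```python
# Minimal sketch: a one-layer "network" learning a mapping by gradient descent.
# Everything here (data, learning rate, iteration count) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 training examples, 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                         # target outputs the network should learn

w = np.zeros(3)                        # the network starts out knowing nothing
lr = 0.1
for _ in range(200):                   # repeated exposure to the training data
    pred = X @ w                       # forward pass: current guess
    grad = X.T @ (pred - y) / len(X)   # error signal drives the update
    w -= lr * grad                     # weights nudged to reduce the error

print(w)  # ends up close to true_w: the mapping was learned, not stored verbatim
```

The analogy hinges on that update step: the error-driven weight adjustment is the "learning" part, regardless of how the examples get delivered to the network.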

[-] Tyler_Zoro@ttrpg.network 3 points 1 year ago

Looking over their concerns, I'm not sure that they have a leg to stand on. The claim they're making is that they've measured an increase in hate-related tweets (I'll take them at their word on this) and then they associate this with Musk taking over.

They present no evidence for this latter claim and do not, as far as I can see, make any attempt to compare against increases in hate on other social media platforms.

Grooming, for example, is one topic they covered. But this is a topic that Republicans have been pushing increasingly as election season spins up. Musk didn't cause that, and that kind of nonsense can be found on Facebook and reddit as well.

I'm inclined to sympathize with an underdog nonprofit, but in this case I just can't see why they expected not to get pushback on such poorly grounded claims.

[-] Tyler_Zoro@ttrpg.network 2 points 1 year ago

I know it can be hard to have your ideas questioned, but at least try to be civil. I never questioned your intentions, yet you're acting like I'm crazy.

I think that's all you. I have never suggested that you are crazy. I suggested that calling Microsoft software "safe" as opposed to Linux, which is "insecure," sounds like trolling. But that's because it does sound like trolling. No "crazy" stated or implied.

A walled garden is obviously more secure than an open source project because nobody can even see the code to find vulnerabilities in it.

You should learn more about the world of software. Seriously. Security experts have been reasonably unanimous for decades in supporting the "many eyes make all bugs shallow" approach to software security, even while cautioning that, repeated as a mere mantra, it can mask a false presumption that open source software is automatically secure.

But just to put it in a simple, logically sealed box: Microsoft's source code has been leaked several times, and bad actors have probably gained access to it over the years without such public knowledge. That means the fundamental difference between Microsoft's proprietary codebase and open source codebases is not, and cannot be, the availability of the source code. Rather, it is the ability of independent groups to review the code on an ongoing basis.

When the only difference is independent review, the only possible result is higher security.

I understand that you like horses. You ride one every day, and you might have even named your horse. The fact is that it's time to buy a car.

None of this constitutes a logical refutation to the examples I provided, which are critical components of modern software development and deployment.

Source: I'm a professional software release engineer who has worked with many of the world's largest corporations.

Quality software costs money

For starters, this is unfounded cargo culting. There is no evidence for this at all. I can point to dozens of very expensive piles of crufty old software that no one should ever go near, and also to some free software that is literally foundational to the modern software world.

Money has nothing to do with the quality of software, but you're also mistaken if you think open source software is free. You can pay IBM millions of dollars for a suite of enterprise-ready open source software. Most of the cost of such software isn't the software itself; it's services, support, training, and customization.

Throwing rocks is also simpler than firing a gun, yet modern militaries aren't training slingers anymore

But they are succeeding wildly by using largely open source software running on open hardware for drones, networking, battlefield analysis, logistics, etc.

[-] Tyler_Zoro@ttrpg.network 2 points 1 year ago

I'd buy it...

[-] Tyler_Zoro@ttrpg.network 6 points 1 year ago

Yeah, this is important. Make it a really big number too so that I have to change my password lots of times in a row in order to put it back to what it was. ;)

[-] Tyler_Zoro@ttrpg.network 4 points 1 year ago

They cannot be anything other than stochastic parrots because that is all the technology allows them to be.

Are you referring to humans or AI? I'm not sure you're wrong about humans...

[-] Tyler_Zoro@ttrpg.network 6 points 1 year ago

Clearly the Founding Fathers were not advanced enough to have crafted the US Constitution unaided.

In a sense you are correct. They cribbed from many of the best-known political philosophers of the time. For example, there are direct quotes from Locke in the Declaration, and his influence on the Constitution can be felt clearly.

[-] Tyler_Zoro@ttrpg.network 3 points 1 year ago

What you are describing is true of older LLMs. It's less true of GPT-4, and GPT-5 or whatever they are training now will likely begin to shed these issues.

The shocking discovery that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.

So the idea that you can look at the limitations of the current generation of LLM and make blanket statements about the limitations of all future generations is demonstrably flawed.
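A rough way to picture that scaling claim: empirically, test loss keeps falling as a power law in both parameter count and training-set size. The sketch below is illustrative only; the constants are loosely based on published scaling-law fits (Hoffmann et al., 2022), and nothing in it comes from this thread.

```python
# Illustrative power-law scaling of loss with model size N and data size D.
# Constants approximate published fits; treat this strictly as a sketch.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> predicted loss ~ {predicted_loss(n, d):.2f}")
```

The surprise was that curves like this kept improving as models and datasets grew, which is why capabilities have continued to climb generation over generation.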

[-] Tyler_Zoro@ttrpg.network 1 point 1 year ago* (last edited 1 year ago)

That first one reminded me of a story I heard at a small SF convention in LA back in the '90s.

This writer was working on The Real Ghostbusters (long story behind that name), and in the episode the team takes the space shuttle to a space station. Everyone on the space station is an analog of a Star Trek character, so hilarity ensues as the rest of the episode is just a Star Trek spoof.

One of the characters is based on Janice Rand who, in the show, had a basket-weave hairdo. The writer included in the script a note to the animators about her hair. The animators were Asian and did not know what a basket-weave was, and it being pre-Wikipedia, they just made an assumption. The test animation they got back had the Janice Rand-alike with a basket literally woven into her hair.

They kept it in the final episode, of course.

Edit: found it, at about 2:30 https://www.crackle.com/watch/d4228840-874e-418c-afd7-a9b93146e6ed/the-real-ghostbusters/ain't-nasa-sarily-so
