Illegal: the strap-on is unlicensed
IDK. It puts them at the forefront of this fight.
If governments successfully prosecute distro maintainers for this (if they even can), then distro maintainers are liable.
And distro maintainers would then have to pursue non-compliant users to cover that liability, or fold.
Which is a huge loss for open source.
Or, there would be a huge legal fight and it turns out that the licence of a distro protects it from its users' actions.
Which would be awesome and a massive win. It also makes sense. Nobody is suing an OS maintainer because it was used for a data breach.
And then the governments have to pursue the actual users. Which... is gonna be useless wrt these laws
Yeh, but once they have all the big companies complying then they can go after the little guys.
And if, at any point, a little guy becomes problematic then they can fuck them over with compliance violations.
So yes. Just use services that don't require age verification.
But once they become large enough or problematic enough, they will get the book thrown at them
I dunno if a "cheap drone" can produce the same magnetic response that a fucking cargo ship can, but it seems extremely unlikely.
And what, you have 2 in the water ahead of you? Is that enough for it to be clear for a cargo ship? They function perfectly all the time and catch every single mine?
What happens when 1 finds a mine? How many extras do you carry? What happens when you run out, "just turn around"?
Drones could probably clear a shipping channel, but at some point an actual ship is going to have to go through it.
And the Suez Canal was blocked for ages due to 1 ship, even after having been operated for decades. That, except the ship sinks.
Sounds like there needs to be some sort of efficiency department set up
Any trump "I did that" stickers showing up?
Also can't comprehend the "protected" bike lane.
It just looks like more road
Do the level-headed Christians do anything about the non-level-headed Christians?
Cause that's what ACAB is about
WMDs, clearly developing nuclear weapons to target not-yet-conceived babies (because the US doesn't give a fuck about actually birthed babies. Just that every single potential baby is actually birthed).
ChatGPT probably saw "nuclear" in some Google search and prioritised it as a target.
Maybe it's unchristian to be christian?
Like, suicide bombers justifying innocent deaths because "if they were true believers, god would protect them. And if they are truly innocent believers then they get their paradise afterlife"...
As if (let's say all of this is actually true) 1 person gaining eternal bliss because they were collateral doesn't absolutely fucking ruin so many other lives dealing with that loss.
It's all so selfish and self centered
For more context on why this is not as good an idea as it may seem: look at the curl project and the barrage of false security reports made against it. And curl is a CLI project that just handles many protocols and does practically no processing. Firefox, like any modern browser, is basically an operating system and even hardware (in the form of WASM, allowing you to run arbitrary assembly written in a systems programming language).
The difference is that Anthropic did due diligence. They narrowed their scope as a research project to just the JS engine. They then fully vetted 1 issue, had a complete minimal test case that would reproduce it, then contacted Mozilla with it.
Mozilla then said "send over the rest, vetted or not", and vetted them all themselves.
Mozilla's findings on the bugs found by Claude were fed back to Anthropic to hone the model.
So, it's completely different from someone yeeting a codebase into an LLM and reporting whatever it spewed out.
Anthropic required Claude to actually produce an example exploit for the bug. And it REALLY struggled to do this. But it did get there eventually, for 1 bug. Which they then reported.
Even then, the exploit was in the JS engine. It wouldn't have escaped the sandbox to actually be an issue. Classic defence-in-depth
Shitty use of AI is fucking horrible. It wastes resources, it wastes time, it wastes energy.
I think this is a responsible use of an LLM in limited scope to produce an actually useful result.
It works out as
O(regex^n)
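The "O(regex^n)" quip is close to a real phenomenon: catastrophic backtracking, where a nested quantifier like `(a+)+` makes a naive backtracking engine try exponentially many ways to split the input. A minimal Python sketch (the pattern and input sizes here are my own illustration, not from the thread):

```python
# Catastrophic backtracking demo: (a+)+$ against "aaa...ab" can never match,
# but the engine still tries every way of splitting the run of a's between
# the inner and outer +, which is O(2^n) in the input length.
import re
import time

def match_time(n: int) -> float:
    """Time a doomed re.match on a string of n a's followed by a 'b'."""
    text = "a" * n + "b"
    start = time.perf_counter()
    re.match(r"(a+)+$", text)  # always fails, after exhaustive backtracking
    return time.perf_counter() - start

for n in (14, 18, 22):
    # each +4 in n should be roughly 16x slower (2^4)
    print(f"n={n}: {match_time(n):.4f}s")
```

Engines based on finite automata (RE2, Rust's `regex`) avoid this by construction, at the cost of dropping backreferences.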