GamingChairModel

joined 2 years ago

I'm giving some reasons why turning on or off location services at the OS level doesn't appreciably change battery life.

[–] GamingChairModel@lemmy.world 9 points 2 days ago (2 children)

You can turn off higher-level location services at the OS level, but at the radio level the cellular network will always need a precise enough location to handle tower handoffs and timing issues between the tower and phone, as well as modern beamforming techniques where the tower "aims" the signal at the phone. The simple act of the phone communicating with a specific tower tells the network where the phone is (sometimes with surprisingly high precision).
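As a rough illustration of how coarse network-side localization falls out of timing alone, here's a sketch using GSM's timing advance (TA) numbers. The tower tells the phone how many bit periods early to transmit so its bursts land in the right slot; that value encodes the round-trip delay, and therefore distance. (GSM chosen just because its numbers are simple and well documented; LTE/5G use finer-grained equivalents.)

```python
# Sketch: distance a GSM tower can infer from timing advance (TA).
# TA is the number of bit periods the phone must advance its uplink
# transmission, which encodes the round-trip propagation delay.

C = 299_792_458          # speed of light, m/s
BIT_PERIOD = 48 / 13e6   # GSM bit period, ~3.692 microseconds

def distance_from_ta(ta: int) -> float:
    """Approximate tower-to-phone distance in metres for a TA step."""
    return ta * BIT_PERIOD * C / 2  # /2 because TA covers the round trip

# One TA step is roughly 550 m of range resolution:
print(round(distance_from_ta(1)))   # ~553
print(round(distance_from_ta(63)))  # ~35 km, the classic GSM cell-size limit
```

So even before any triangulation between towers, a single serving tower knows your distance from it to within a few hundred metres.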

911/emergency services also use lower-level location techniques, but I'm pretty sure those functions don't get called unless you dial an emergency number.

> It's not feasible for a mass market consumer product like Starlink.

Why not? That's a service designed to serve millions of simultaneous users from nearly 10,000 satellites. These systems have to be designed to be at least somewhat resistant to unintentional interference, which means they're usually quite resistant to intentional jamming too.

Any modern RF protocol is going to use multiple frequencies, timing slots, and physical locations in three dimensional space.

And so the reports out of Iran are that Starlink service is degraded in places but not fully blocked. It's a cat and mouse game out there.

I'd think that there are practical limits to jamming. After all, jamming doesn't make radio impossible, it just makes the transmitter and receiver need to get closer together (so that their signal over that shorter distance is strong enough to overcome the jamming coming from further away). Most receivers filter out the frequencies they're not looking for, so any jammer needs to actually be hitting that receiver on that specific frequency. And many modern antenna arrays rely on beamforming techniques that are less susceptible to unintentional interference or intentional jamming coming from a different direction than where they're looking. Even less modern antennas can be heavily directional based on their physical design.
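The "get closer together" point can be sketched as a back-of-the-envelope link budget. Free-space path loss falls 6 dB every time the link distance halves, so shortening the wanted link buys back signal-to-jammer ratio while the jammer's contribution stays fixed. All the numbers below (frequency, powers, distances) are made up for illustration, and antenna gains and filtering are ignored:

```python
import math

# Free-space path loss (FSPL) in dB for an isotropic link.
def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Signal-to-jammer ratio at the receiver, in dB.
def signal_to_jammer_db(p_tx_dbm, d_tx_m, p_jam_dbm, d_jam_m, freq_hz):
    s = p_tx_dbm - fspl_db(d_tx_m, freq_hz)   # wanted signal at receiver
    j = p_jam_dbm - fspl_db(d_jam_m, freq_hz) # jammer power at receiver
    return s - j

f = 2.4e9  # arbitrary example frequency

# 1 W (30 dBm) transmitter at 1 km vs a 100 W (50 dBm) jammer at 5 km:
print(signal_to_jammer_db(30, 1000, 50, 5000, f))
# Halve the link distance and S/J improves by ~6 dB, jammer unchanged:
print(signal_to_jammer_db(30, 500, 50, 5000, f))
```

Which is the whole cat-and-mouse dynamic in miniature: the jammer has to outspend the geometry.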

If you're trying to jam even a city block, say a 100m radius, across any and all frequencies that radios use, that's gonna take some serious power. Which will require cooling equipment if you want to keep it on continuously.

If you're trying to jam an entire city, though, it just might not be practical to hit literally every frequency that a satellite might be using.

I don't know enough about the actual power and equipment requirements, but it seems like blocking satellite communications between satellites you don't control and transceivers scattered throughout a large territory is more difficult than you're making it sound.

90GB of RAM and NAND combined. I'm guessing most of it is actual persistent storage for all the stuff the infotainment system uses (including imagery and offline map data for GPS, which is probably a big one), rather than actual memory in the sense of desktop computing.

Everything else that you said seems to fit the general thesis that they're making a lot more money selling to AI companies.

If those reasons were still true but the memory companies stood to not make as much money on those deals, I guarantee the memory manufacturers wouldn't have taken the deal. They only care about money, and the other reasons you list are just the mechanisms for making more money.

It's a very common complaint among people administering websites. This particular AI poisoning service seems to be directed at those people.

So maybe it's not the majority of complaints about AI, but it's a significant portion of the complaints about AI from site administrators.

[–] GamingChairModel@lemmy.world 2 points 6 days ago (3 children)

> The Fediverse is designed specifically to publish its data for others to use in an open manner.

Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to efficiently scrape that data, great. Problem is that there have been crawlers that do things very inefficiently (whether by malice, ignorance, or misconfiguration) and scrape the HTML of sites repeatedly, driving up some hosting costs and effectively DoSing some of the sites.

If you put honeypot URLs in the mix, keep polite bots out with robots.txt, and keep humans out by hiding the links, you can serve poisoned responses only at URLs that nobody should be visiting, without worrying too much about collateral damage to legitimate visitors.
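Concretely, the setup described above only takes a couple of lines; the `/trap/` path and filenames here are hypothetical placeholders. Polite crawlers honor the disallow rule, humans never see the hidden link, so only impolite crawlers ever request the trap:

```text
# robots.txt — polite bots never fetch the trap
User-agent: *
Disallow: /trap/
```

```html
<!-- hidden from humans, harvested only by impolite crawlers -->
<a href="/trap/archive.html" style="display:none" aria-hidden="true">archive</a>
```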

[–] GamingChairModel@lemmy.world 2 points 6 days ago (2 children)

> What's crazy is that they aren't just doing this because they make more money with AI.

No, they really are making more money by selling whole wafers rather than packaging and soldering onto DIMMs. The AI companies are throwing so much money at this that it's just much more profitable for the memory companies to sell directly to them.

[–] GamingChairModel@lemmy.world 12 points 6 days ago

That's why "bullshit," as defined by Harry Frankfurt, is so useful for describing LLMs.

A lie is a false statement that the speaker knows to be false. But bullshit is a statement made by a speaker who doesn't care if it's true or false.

[–] GamingChairModel@lemmy.world 21 points 6 days ago

If I am reading this correctly, anyone who wants to use this service can just configure their HTTP server to act as the man in the middle of the request, so that the crawler sees your URL while the server retrieves the poisoned content from the poison fountain service.

If so, that means the crawlers wouldn't be able to filter by URL because the actual handler that responds to the HTTP request doesn't ever see the canonical URL of the poison fountain.

In other words, the handler is "self hosted" at the site's own URL, while the stream itself comes from a separate URL that the crawler never sees.
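If that reading is right, the reverse-proxy arrangement would look something like this in nginx (the `/trap/` path and the upstream hostname are hypothetical; check the service's own docs for the real endpoint):

```nginx
# Requests to trap paths are proxied to the poison-fountain service.
# The crawler only ever sees this site's URL, never the upstream one,
# so it can't filter the poison out by URL.
location /trap/ {
    proxy_pass https://poison-fountain.example/stream/;
    proxy_set_header Host poison-fountain.example;
}
```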

In terms of usage of AI, I'm thinking "doing something a million people already know how to do" is probably on more secure footing than trying to go out and pioneer something new. When you're in the realm of copying and maybe remixing things for which there are lots of examples and lots of documentation (presumably in the training data), I'd bet large language models stay within a normal framework.


Curious what everyone else is doing with all the files that are generated by photography as a hobby/interest/profession. What's your working setup, how do you share with others, and how are you backing things up?
