GamingChairModel

joined 2 years ago
[–] GamingChairModel@lemmy.world 1 points 38 minutes ago (1 children)

The Fediverse is designed specifically to publish its data for others to use in an open manner.

Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to pull that data efficiently, great. The problem is that there have been crawlers that do things very inefficiently (whether through malice, ignorance, or misconfiguration), scraping the HTML of sites over and over, driving up hosting costs and effectively DoSing some of the sites.
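For what it's worth, here's a rough sketch of what the efficient path looks like: standard ActivityPub content negotiation returns structured JSON for a post instead of the rendered HTML page (the instance and post URL below are made up):

```python
# Minimal sketch: fetch a Fediverse post as structured ActivityPub JSON
# instead of scraping the rendered HTML page. The post URL is hypothetical.
import requests

post_url = "https://lemmy.example/post/12345"  # made-up post URL

# Standard ActivityPub content negotiation: ask for the JSON representation.
resp = requests.get(
    post_url,
    headers={"Accept": "application/activity+json"},
    timeout=10,
)
resp.raise_for_status()

activity = resp.json()
print(activity.get("type"), "-", activity.get("name") or activity.get("content", "")[:80])
```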

If you put honeypot URLs in the mix, keep polite bots out with robots.txt, and keep humans out by hiding those links, you can serve poisoned responses only at URLs that nobody should be visiting, without worrying too much about collateral damage to legitimate visitors.
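A minimal sketch of that arrangement, assuming a tiny Flask app; the paths and the filler-text generator are made up for illustration:

```python
# Sketch of the honeypot idea: robots.txt warns polite bots away from /trap/,
# the trap link is hidden from humans, and only rule-breaking crawlers get poison.
import random
from flask import Flask, Response

app = Flask(__name__)

ROBOTS_TXT = """User-agent: *
Disallow: /trap/
"""

WORDS = ["gravel", "umbrella", "sawdust", "pylon", "meridian", "topaz"]

def poison_page() -> str:
    """Generate worthless filler text plus more trap links to keep crawlers busy."""
    text = " ".join(random.choices(WORDS, k=300))
    links = "".join(f'<a href="/trap/{random.randint(0, 99999)}">more</a>' for _ in range(5))
    return f"<html><body><p>{text}</p>{links}</body></html>"

@app.route("/robots.txt")
def robots():
    # Polite bots read this and never enter /trap/.
    return Response(ROBOTS_TXT, mimetype="text/plain")

@app.route("/")
def index():
    # Real page for humans; the trap link is invisible, so people never click it.
    return '<html><body><p>Welcome.</p><a href="/trap/start" style="display:none">.</a></body></html>'

@app.route("/trap/<path:anything>")
def trap(anything):
    # Only crawlers that ignore robots.txt and follow hidden links end up here.
    return Response(poison_page(), mimetype="text/html")

if __name__ == "__main__":
    app.run(port=8080)
```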

What's crazy is that they aren't just doing this because they make more money with AI.

No, they really are making more money by selling whole wafers rather than packaging the chips and soldering them onto DIMMs. The AI companies are throwing so much money at this that it's simply more profitable for the memory companies to sell to them directly.

That's why "bullshit," as defined by Harry Frankfurt, is so useful for describing LLMs.

A lie is a false statement that the speaker knows to be false. But bullshit is a statement made by a speaker who doesn't care if it's true or false.

If I am reading this correctly, anyone who wants to use this service can just configure their HTTP server to act as a man in the middle (essentially a reverse proxy), so that the crawler sees your URL but is actually served poison fountain content fetched from the poison fountain service.

If so, that means the crawlers wouldn't be able to filter by URL, because the handler that actually responds to their HTTP request lives at your URL and the canonical URL of the poison fountain never appears to them.

In other words, the honeypot handler is "self hosted" at your own URL, while the stream itself is pulled from the poison fountain's URL, which the crawler never sees.
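If that's right, the setup might look roughly like this sketch, assuming a Flask app on your domain and a made-up upstream URL for the poison fountain:

```python
# Sketch of the "man in the middle" arrangement as I understand it: my server
# owns the trap URL, but the response body is streamed from the poison
# fountain's own URL, which the crawler never sees. Both URLs are hypothetical.
import requests
from flask import Flask, Response

app = Flask(__name__)

UPSTREAM = "https://poison-fountain.example/stream"  # made-up upstream service

@app.route("/trap/<path:anything>")
def trap(anything):
    # Fetch and stream poison from the upstream service; the crawler only
    # ever sees my domain in the URL it requested.
    upstream = requests.get(UPSTREAM, stream=True, timeout=30)
    return Response(
        upstream.iter_content(chunk_size=8192),
        content_type=upstream.headers.get("Content-Type", "text/html"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```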

[–] GamingChairModel@lemmy.world 3 points 4 hours ago

In terms of usage of AI, I'm thinking "doing something a million people already know how to do" is probably on more secure footing than trying to go out and pioneer something new. When you're in the realm of copying and maybe remixing things for which there are lots of examples and lots of documentation (presumably in the training data), I'd bet large language models stay within a normal framework.

[–] GamingChairModel@lemmy.world 20 points 5 days ago (1 children)

The hot concept in the late 2000s and early 2010s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame, where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

Monetizing that goodwill didn't just ruin the look and feel of the sites: it permanently altered people's willingness to participate in those communities. Some, of course, don't mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

Most Android phones with always-on displays show a grayscale screen that is mostly black. But iPhones introduced always-on with 1 Hz screens and still show a less saturated, less bright version of the color wallpaper on the lock screen.

Joke's on him, I'm putting my website at 305.domain.tld.

It's actually pretty funny to think about other AI scrapers ingesting this nonsense into the training data for future models, too, where the last line isn't enough to get the model to discard the earlier false text.

[–] GamingChairModel@lemmy.world 1 points 5 days ago (2 children)

On phones and tablets, variable refresh rates make an "always on" display feasible within the battery budget: you can keep something like the lock screen on at all times without burning through too much power.

On laptops, this might open up the possibility of the lock screen, or some kind of static or slideshow screensaver, staying on longer while idle before the display turns off.

While we're at it, I never understood why the convention for domain names isn't left to right: TLD, then domain, then subdomain. Most significant on the left is how we write almost everything else, including numbers and ISO 8601 dates.
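A toy illustration of what that ordering buys you: write the labels TLD-first (the way reverse-domain package names do) and hierarchical grouping falls out of a plain lexical sort, just as ISO 8601 dates sort correctly as strings. The hostnames here are made up:

```python
# Reverse domain labels so the most significant part (the TLD) comes first,
# then a plain lexical sort groups hosts by hierarchy.
hosts = [
    "mail.example.org",
    "example.org",
    "blog.example.org",
    "example.com",
    "www.example.com",
]

def big_endian(host: str) -> str:
    """Rewrite sub.example.org as org.example.sub (TLD first)."""
    return ".".join(reversed(host.split(".")))

for h in sorted(hosts, key=big_endian):
    print(big_endian(h))
# com.example and com.example.www sort next to each other, followed by
# org.example, org.example.blog, org.example.mail -- grouped by hierarchy,
# the same way ISO 8601 dates sort correctly as plain strings.
```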

[–] GamingChairModel@lemmy.world 1 points 6 days ago (1 children)

It’s a fancy marketing term for when AI confidently does something in error.

How can the AI be confident?

We anthropomorphize the behavior of these technologies to analogize their outputs to phenomena observed in humans. In many cases, the analogy helps people decide how to respond to the technology itself, and to that class of error.

Describing things in terms of "hallucinations" tells users that the output shouldn't always be trusted, regardless of how "confident" the technology seems.

 

Curious what everyone else is doing with all the files that are generated by photography as a hobby/interest/profession. What's your working setup, how do you share with others, and how are you backing things up?
