this post was submitted on 19 Jun 2025
111 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 16 comments
[–] HedyL@awful.systems 9 points 10 hours ago

Google has a market cap of about 2.1 trillion dollars. Therefore the stock price only has to go up by about 0.00007 percent following the iNaturalist announcement for this "investment" to pay off. Of course, this is just a back-of-the-envelope calculation, but maybe popular charities should keep this in mind before accepting money in a context like this.
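
The arithmetic, spelled out (a quick sketch; the roughly $1.5 million it implies is derived purely from the figures quoted above, not independently confirmed):

```python
# Back-of-the-envelope check of the figures quoted above.
market_cap = 2.1e12         # Google's market cap: ~2.1 trillion dollars
price_move = 0.00007 / 100  # the quoted 0.00007 percent stock-price increase

# Market-cap gain from that tiny move -- about $1.47 million, i.e. on the
# order of a large charitable grant.
gain = market_cap * price_move
print(f"${gain:,.0f}")      # -> $1,470,000
```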

[–] otter@lemmy.ca 22 points 20 hours ago* (last edited 20 hours ago) (2 children)

I went through some of the links from the article, and there was an update pinned in one of them:

https://forum.inaturalist.org/t/what-is-this-inaturalist-and-generative-ai/66140/431

@procyonloiter and I just had a 3-hour in-person talk with @loarie and I am delighted to say that it has completely alleviated my concerns around this entire issue. I went into it seriously contemplating deleting my entire account and many years of work, and I have come out of it feeling like a massive weight has been lifted.

This whole thing has just been very poor messaging and some serious miscommunication, and DOES NOT indicate any actual shift in how iNat is planning to operate.

  • The lack of communication updates has been because everyone on staff is freaked out and overwhelmed by the amount of backlash, and in a bit of paralysis about how to appropriately respond. There is no nefarious reason for it.
  • The grant from Google is indeed a grant, and they are not receiving any data or anything else in exchange for it (I’m sure they’re scraping and stealing stuff anyway, but that’s true for anything posted online)
  • The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words
  • The vast majority of the funds will be used to cover normal operating costs of what iNat does every day. A small amount will be going to some specific grant-related projects, which, again, are not actually genAI. There is no guarantee these things will even be implemented on iNat in the end - if they suck, they’ll be tossed out.
  • The staff are very receptive to user concerns, and there will be a chance for people to speak to them and ask specific questions - since it’s a Friday and everyone is in different time zones, the details haven’t been fully organized yet. I suggested maybe a drop-in Zoom call, where people can join and leave throughout a set time period, so it’s not overwhelmed by a ton of people all competing for talking space at once.

That doesn’t completely cover everything we spoke about, but I’m going to post this now to get it up in the thread - please feel free to ask any questions I might be able to answer!

[–] milicent_bystandr@lemm.ee 41 points 19 hours ago (1 children)

The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words

That sounds particularly suspect, coming with no answer as to what it does mean.

[–] otter@lemmy.ca 12 points 19 hours ago

I agree, and it's also a secondhand account from someone who met with them.

However, it IS enough for me to hold off on deleting anything until I can hear more. My big concern was this point:

The grant from Google is indeed a grant, and they are not receiving any data or anything else in exchange for it (I’m sure they’re scraping and stealing stuff anyway, but that’s true for anything posted online)

[–] dgerard@awful.systems 14 points 19 hours ago (1 children)

yeah, I suggest you keep reading the thread, which points out how they were explicitly talking about generative AI a year earlier

[–] acockworkorange@mander.xyz 13 points 17 hours ago* (last edited 17 hours ago) (1 children)

This is the bit:

seblivia

The “generative AI” mention in the grant is badly worded corporate buzzspeak, and doesn’t accurately reflect anything that will be used here - disregard any association to what you normally expect from those words

In the blog post, they described a specific feature they wanted to develop, and linked to a blog post from last year that said they wanted to use a Vision Language Model, which is essentially an LLM with some visual processing stuff attached. This isn’t badly worded corporate buzzspeak; they very clearly gave an example of what they wanted to do, and have had a plan in place for at least a year now that involves using generative AI.


Personally, the contradictions between what was said in the blog posts, both a year ago and a few days ago, and what has been said on the forums since then are making it hard to feel like I can trust anything the staff now say about this project. It feels like they’re either wildly backpedalling or have no idea what they’re talking about when it comes to AI, and if it’s the former I’d much prefer for them to just say “we’ve listened to the community’s responses and have decided to pivot towards developing something more like this instead of the original plan to use genAI”.

Maybe I’m just incredibly cynical, but I don’t see how saying you want to use a very specific kind of genAI, and showcasing a mockup of a feature you have apparently been planning for at least a year, could be passed off as just “badly worded corporate buzzspeak”.
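
For readers unfamiliar with the term: a vision-language model is, as described above, essentially an LLM with an image encoder attached - image patches are projected into the same embedding space as text tokens and fed through the decoder together. A toy sketch of that wiring, with made-up shapes and stand-in functions; it implies nothing about what iNaturalist or Google actually run:

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image: np.ndarray) -> np.ndarray:
    """Stand-in vision tower: split the image into patches and
    project each patch to the LLM's embedding width."""
    patches = image.reshape(16, -1)              # 16 flattened patches
    projection = rng.normal(size=(patches.shape[1], 64))
    return patches @ projection                  # (16, 64) image tokens

def llm_decoder(token_embeddings: np.ndarray) -> np.ndarray:
    """Stand-in language model: returns next-token logits.
    A real VLM runs a transformer here."""
    vocab_projection = rng.normal(size=(64, 1000))
    return token_embeddings[-1] @ vocab_projection  # logits over a 1000-word vocab

# "An LLM with some visual processing stuff attached":
# image tokens are simply prepended to the text tokens.
image = rng.normal(size=(64, 64))
text_embeddings = rng.normal(size=(5, 64))       # e.g. "what species is this"
sequence = np.concatenate([image_encoder(image), text_embeddings])
next_token_logits = llm_decoder(sequence)
print(next_token_logits.shape)                   # (1000,)
```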

[–] flora_explora@beehaw.org -3 points 10 hours ago (1 children)

Well sure, I also felt annoyed when they first announced generative AI. But I'm pretty confident that the iNat staff aren't doing anything malicious, and this example shows them thinking about an actual, productive way to use generative AI. Will this feature ever make it? Probably not, because the testing needed to reach sufficient accuracy would be enormous.

Another example of generative AI (if I'm not mistaken) would be the feature they are testing, where you can type in something to search for taxa.

Both times they use generative AI as a specific tool for a specific task, and in both cases I'm confident that they will be checking for accuracy. The iNat staff are very much connected with the naturalist users and are really motivated to make iNat better (just look through the forum).

On the other hand, considering deleting your iNat account just because they mention generative AI seems like being caught up in the AI hype train as well, just on the other side of it. Not all generative AI has to be bad, as long as it is used as a specific tool for a specific problem, with its limitations kept in mind.

[–] ebu@awful.systems 7 points 6 hours ago (1 children)
  1. no one is assuming iNaturalist is being malicious; saying otherwise is just well-poisoning.
  2. there is no amount of testing that can ever overcome the inherently stochastic output of LLMs (sketched below, after this list). the "best-case" scenario is text-shaped slop that is more convincing, but not any more correct, which is an anti-goal for iNaturalist as a whole.
  3. we've already had computer vision for ages. we've had google images for twenty years. there is absolutely no reason to bolt a slop generator of any kind to a search engine.
  4. "staff is very much connected with users" obviously should come with some asterisks given the massive disconnect between staff and users on their use and endorsement of spicy autocorrect
  5. framing users who delete their accounts in protest of machine slop being put up on iNaturalist (which is actually the point of contention here) as being over-reactive to the mere mention of AI, and thus basically the same as the AI boosters? well, it's gross. iNat et al. explicitly signaled that they were going to inject AI garbage into their site. users who didn't like that voted with their accounts and left. you don't get to post-hoc ascribe them a strawman rationale and declare them basically the same as the promptfans, fuck off with that
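
A minimal illustration of the stochasticity point from item 2 (toy numbers and standard temperature sampling; no particular model is implied):

```python
import numpy as np

rng = np.random.default_rng()

# Toy next-token distribution a language model might assign after
# one fixed, identical prompt.
tokens = ["Quercus", "Fagus", "Acer"]
logits = np.array([2.0, 1.5, 1.4])

def sample(temperature: float = 1.0) -> str:
    """Standard temperature sampling: same input, varying output."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)

# Asking the same "question" five times can return different "answers";
# no amount of testing removes the variation, because it is by design.
print([sample() for _ in range(5)])
```
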
[–] flora_explora@beehaw.org -1 points 2 hours ago (1 children)

First of all, sorry if my comment sounded like I was dismissing the positions of the people commenting before me. I was trying to explore the other side of the argument and am genuinely open to any outcome here.

  1. Hm yes, maybe not malicious, but the quoted portion from the iNat forum sounded very much as if the person commenting considered the iNat staff untrustworthy.

  2. I'm probably not knowledgeable enough to really have an opinion. I'd have thought that there are some use cases where generative AI can be helpful. But you have a point in that iNat actually relies on correct and trustworthy results.

  3. yeah, I get that and I'm not in favor of it either. But it's probably also a cost-benefit calculation for iNat: getting a grant from Google in exchange for having to work on some sort of generative AI.

  4. Sorry, I'm out of the loop. What are you referring to?

  5. OK fair, maybe that was a bit much, sorry. I think it is a huge step to delete your account and leave a community just based on the mention of generative AI, and I have a hard time getting into that headspace. Like, sure, if you've invested little time in the site. But I've put thousands of hours into iNat and would certainly need a strong incentive to delete my account...

[–] ebu@awful.systems 2 points 1 hour ago

no worries -- i am in the unfortunate position of very often needing to assume the worst in others, and maybe my reading of you was harsher than it should have been; for that i am sorry. but...

"generative AI" is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.

LLMs are inherently unfit for every purpose. they might be "useful", in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception to this is when you need a lot of text in a hurry and don't care about the quality or accuracy of the text -- in other words, spams and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.

so when ostensibly-smart people, especially ones running public information systems, propose using LLMs for things they are unable to do, such as explaining species identification procedures, it means either 1) they've been suckered into believing they're capable of those things, or 2) they're being paid to propose them. sometimes it is a mix of both. either way, it very much indicates those people should not be trusted.

furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of "consent", actively trying to undermine it at every turn. that hostility even made it into the attempted reassurance on that forum post about the mystery demo LLMs -- note the use of the phrase "making it opt-out". why not "opt-in"? why not "with consent"?

it's no wonder that people are leaving -- the writing is more or less on the wall.