this post was submitted on 20 Mar 2026
121 points (99.2% liked)
Fuck AI
The same way.
The result the LLM produces is a direct link to the relevant information that I can click to go straight to it.
An example would be a giant collection of files, think 10GB+ of PDFs, or more. I want any relevant sections on a topic; it can quickly aggregate them and give me links that open straight to the relevant sections in specific files, which I then read myself.
This stuff comes up a lot in my industry (software dev), as we often inherit huge pools of legacy documentation on massive codebases.
When I'm tasked with fixing one small, specific piece of the system, it can take me hours to find what I'm looking for in the (often poorly maintained) docs.
But a vector DB set up to index that data, with an LLM wired up to it, can search it in milliseconds and pull up relevant sections right away, and I can click through and dig deeper from there as much as I need.
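The workflow above can be sketched in a few lines. This is a toy, stdlib-only version: the `embed` function here is a stand-in (character-trigram hashing) for a real embedding model, and the file paths and section anchors are made up for illustration. The shape of the idea is the same: chunk the docs once into vectors, then rank chunks by similarity to an embedded query and hand back links.

```python
import hashlib
import math

def embed(text, dims=64):
    """Toy embedding: hash character trigrams into a fixed-size vector.
    A real setup would use an embedding model here instead."""
    vec = [0.0] * dims
    t = text.lower()
    for i in range(len(t) - 2):
        h = int(hashlib.md5(t[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Hypothetical index of (file path, section anchor, section text),
# built once over the whole documentation pool.
sections = [
    ("docs/auth.pdf", "#p12", "OAuth token refresh and session expiry handling"),
    ("docs/billing.pdf", "#p4", "invoice generation and tax rounding rules"),
    ("docs/auth.pdf", "#p30", "password reset email templates"),
]
index = [(path, anchor, embed(text)) for path, anchor, text in sections]

def search(query, top_k=2):
    """Return the top-k (path, anchor) links, ranked by similarity to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[2]), reverse=True)
    return [(path, anchor) for path, anchor, _ in ranked[:top_k]]

print(search("how do session tokens expire"))
```

In a real deployment the index would live in a vector DB and the LLM would only summarize the retrieved chunks; the ranking itself is plain nearest-neighbor math like this.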
This sort of "fuzzy search" over vectorized text is something this stack does very well. Embedding tokens into vectors is part of how an LLM processes its input, and you can use that same embedding step on its own to build search indexes (running your documents through the embedding model once, turning each chunk into a deterministic vector you can compare against a query).
And it's important to note, the unreliability you perceive here doesn't apply. This specific type of search with vector DBs will always produce the same results from the same input, every time. It's deterministic.
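The determinism claim is easy to check: the text-to-vector mapping is a pure function, with no sampling or temperature involved. A minimal sketch, again using a toy hash-based `embed` in place of a real embedding model:

```python
import hashlib

def embed(text, dims=64):
    """Toy stand-in for an embedding step. Real vector DBs store
    model-produced vectors, but the point holds: text in, vector out,
    as a pure function with no randomness."""
    vec = [0] * dims
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dims] += 1
    return vec

query = "token refresh flow"
# Same input produces the same vector every time, so the same
# nearest-neighbor ranking comes back every time.
print(embed(query) == embed(query))
```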