technology
On the road to fully automated luxury gay space communism.
Spreading Linux propaganda since 2020
- Ways to run Microsoft/Adobe and more on Linux
- The Ultimate FOSS Guide For Android
- Great libre software on Windows
- Hey you, the lib still using Chrome. Read this post!
Rules:
- 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
- 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
- 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
- 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
- 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
- 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
- 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.
you just haven't invested enough bro come on the singularity is right around the corner do another venture capital surge i promise it will be worth it
Yeah, the singularity of the economic black hole lmao
A mere 12 percent of CEOs reported that it’d accomplished both goals.
That 12 percent is either full of shit, they were running things like garbage to begin with, or the shitstorm just hasn't hit them yet.
I actually think around 10% success rate sounds about right here. There are niches where this tech works well, but it's being applied everywhere indiscriminately. So it makes sense that most deployments fail, but a small percentage actually finds the right niche.
This reminds me a lot of the dotcom bubble in that everyone was trying to make online businesses, even in industries where it made no sense. Online retail and other stuff was actually useful, but 99% of those businesses were graft that had no actual use, just an excuse to grab VC funding and run.
People are putting chatbots and LLMs everywhere, even when they're unnecessary or even dangerous to implement.
Edit: just saw your comment further down. You put it way better than I could.
Yup, this is exactly like the dotCom bubble, except on an even bigger scale with a lot more shady business practices if that's even possible.
I'd say it's more likely that 95% were greedily chasing hype, went a little too all-in, and 7% of them got lucky it didn't burn them.
Don't worry, it's all "speculated ROI" now, and when shit goes up in flames they'll give the CEOs 10 billion dollars while firing everyone else.
That was supposed to be what AI was for in the first place, an excuse to cut staff, smh damn capitalists don't even know their own scams anymore
Just a few more trillion and all the water in Tennessee and Minnesota and half the juice in the grid I swear bro
CEO: "Wow, this could replace me, because all I really do is send an email once a day and try to say nice things about business business while trying to profit off of insider trading. And this thing won't do the last bit, so it's better than me! Everyone use the theft machine!"
Workers: "Healthcare?"
CEO: "No. Only use theft machine!"
This sounds damning but also doesn't mean a lot. Many companies go many many years building things before seeing a "return" on it. They make money but they are making less than they're burning in VC funds, and as they start approaching the break even point, they use that to go get more VC funds to burn to keep expanding. This is largely how the tech industry works. Very very few companies are cash flow positive during their growth phases.
It does mean that there's risk in this investment, because from a business perspective AI hasn't been proven a valuable investment yet. But it still might be for at least some of these companies, and unless we really do run into the physics issues with AI with regards to data center capacity and build rate, it could take a decade or more for this "we aren't seeing a return yet" thing to matter.
I agree, it's basically a completely new tool looking for a market fit. It's also worth noting that these companies are basically looking for one stellar application. If they hit on something that works really well, that's gonna be the business model. So, they're perfectly fine with most of the pilots failing if they can find one that works well.
That said, I do think there is a bubble where a lot of companies are implementing these tools without having a good fit for them, and there's a ton of money being wasted in the process. It's kind of the same thing we saw with the dotCom bubble. When it popped, there was an extinction event where most companies went belly up, but we got a ton of useful tech out of it that underpins the internet today.
I expect we'll see a similar thing happen with AI. Except, this time around there's another factor which is that there's direct competition from China. My prediction is that Chinese models will win in the end because Chinese companies aren't looking for direct monetization, they're treating models as infrastructure, sort of what we see with Linux. Most companies don't try to monetize it directly, they build stuff like AWS on top of it and that becomes the product.
I expect American companies are just going to run out of runway in the near future, and they're also getting squeezed by cheap Chinese models that are also open source. Big companies prefer running stuff on prem because they can keep their data private that way, and they can tune the models any way they want. Meanwhile, stuff like DeepSeek is orders of magnitude cheaper than Claude for individual use. So I just don't see a long term business model for models as a service, especially not at pricing like Anthropic's or even Google's. The vast majority of people aren't gonna pay 20 bucks a month for this stuff, let alone 100.
Most companies don't try to monetize it directly, they build stuff like AWS on top of it and that becomes the product.
My company has an idea like this going rn but still charges per token bc it uses the vc funded companies services instead of having their own models
For business customers per-token costs might not be a deal breaker, but for anything consumer facing it's a really tough sell in my opinion. I do expect that the cost of running models is going to come down significantly in the near future though. There's a whole bunch of recent research identifying key optimizations that can be made. Some of the ones I've found particularly interesting:
- Deepseek-mhc https://arxiv.org/abs/2512.24880
- a graph-guided code generation framework that could solve the problem of keeping a repo’s structure in context https://arxiv.org/abs/2509.16198
- Hierarchical Reasoning Model https://arxiv.org/abs/2506.21734
- MemOS https://arxiv.org/abs/2507.03724 (https://github.com/MemTensor/MemOS)
- https://github.com/BICLab/SpikingBrain-7B
- German researchers achieved 71.6% on ARC-AGI using a regular GPU for 2 cents per task https://arxiv.org/abs/2505.07859
- Affordable AI assistants with knowledge graph https://arxiv.org/abs/2504.02670 (https://github.com/spcl/knowledge-graph-of-thoughts)
- Nested Learning: A new ML paradigm for continual learning https://abehrouz.github.io/files/NL.pdf
- Continual Low-Rank Adaptation for Pre-trained Models https://arxiv.org/abs/2502.17920
- Recursive language models https://arxiv.org/pdf/2512.24601
- embeddings and context https://pub.sakana.ai/DroPE/
- DeepSeek Engram paper https://github.com/deepseek-ai/Engram/blob/main/Engram_paper.pdf
- a 32M parameter multi vector model outperforms 600M parameter models https://arxiv.org/abs/2601.08620
Once these ideas start getting integrated, I expect that we'll see much more capable models that can run on fairly cheap hardware. Even local models will likely be quite capable for a lot of tasks. And at that point running a model as a service and charging per token is going to be a dead end.
This is an incredible list of research, TYSM! In my spare work time I have a small tool that tries to accomplish what #2 describes. I haven't clicked the link and read it yet, but now I will read everything.
I played around with implementing the recursive language model paper, and that actually turned out pretty well https://git.sr.ht/~yogthos/matryoshka
Basically, I spin up a js repl in a sandbox, and the agent can feed files into it, and then run commands against them. What normally happens is that the agent has to ingest the whole file into its context, but now it can just shove files into the repl, and then do operations on them akin to a db. And it can create variables. For example, if it searches for something in a file, it can bind the result to a variable and keep track of it. If it needs to filter the search later, it can just reference the variable it already made. This saves a huge amount of token use, and also helps the model stay more focused.
About how large are the codebases you've used this RLM with?
Around 10k lines or so. I use it as an MCP server that the agent calls when it decides it needs to. The whole code base doesn't get loaded into the repl, just individual files as it searches through them.
But it isn't a niche area, it's the major focus of corporate investment for the last year or two.
who could've seen this coming

And we’re going to pay for it, either way.
Lol, for real. I'm spending today undoing work my AI-slop-loving colleague wrote on Friday. So effectively zeroed out.
there were some charts floating around on twitter (in Tooze circles, can't find them quickly) showing some increases in total factor productivity for tech workers and some text-adjacent fields, around 3%, with negative correlations for teaching and I think doctors/nurses? which seems kinda less than I expected
How are they measuring productivity? If one of your KPIs is "Accelerate Business Objectives By Leveraging Theft Machine Synergy", the results may be a bit skewed. 😄
the weirdest one is construction, I don't know what to make of it
Construction usually involves a lot of repetitive, very detail-oriented tasks, like invoicing, writing buy orders, signing off on deliveries, doing payroll, scheduling shifts, project management. A very limited scope LLM can make these tasks much easier I imagine.
HUH, WHO WOULDA THOUGHT
Also, "seeing a financial return" is not equivalent to "using AI was a good idea". There are multiple companies I've interacted with that e.g. use AI as their support tool and cut their 'real' support team hugely. And it's so furiously useless that I quickly hate the company because it can't support shit.
A lot of these companies who are seeing a financial return today, will quickly see that go away as their business stagnates/shrinks from offering a shittier service. And a lot of those that don't will be companies that can just get away with infinite enshittification and externalising their costs (eg natural monopolies, stuff people need to live, etc), thus making all of life worse even if their line goes up.
From what I've heard, this is the same ol' story as with any other tech bandwagon of the past 15 years, be it blockchain, big data, predictive algorithms, etc. I saw this cycle first hand when I was working on tech projects:
Company execs see at some conference that this is the new big thing and that their business is going to fail if they don't get in on the ground floor. Then some sales reps from a big corporate tech company (SAP, Oracle, Google, MS) talk them into spending huge amounts of money on all the bells and whistles before they've even figured out how to implement the tech in their company. Finally, the consultants come in. In the best case, there's a very specific use case for the tech within the company and it saves/makes quite a bit of money; in the worst case, it's a botched implementation that causes more pain than it solves. Either way, the modest pay-offs are nowhere near enough to recoup the initial investment. Rinse and repeat.
These ghouls are practically giddy at the prospect of firing as many workers as possible. Sorry to disappoint them.