this post was submitted on 28 Jan 2026
11 points (92.3% liked)

Technology

1363 readers
41 users here now

A tech news sub for communists

founded 3 years ago
top 14 comments
[–] o_d@lemmygrad.ml 5 points 1 week ago (1 children)

Comparing ARR of US and Chinese companies is comparing apples with Big Macs. The author completely ignores that the major consumers paying for AI in the west are governments and other big tech companies, not its end users. This is true for Chinese companies as well, as is mentioned in the article. The large disparity in revenue, however, highlights the fact that US companies operate as a cartel whereas Chinese companies do not.

What are your thoughts on the author's conclusions? Nothing in the article seems to back up the assumptions that Chinese companies will be more focused on the international market than the domestic one, or that they will be more focused on dev tools, support, chatbots, etc. in the near future.

[–] yogthos@lemmygrad.ml 4 points 1 week ago (1 children)

I expect that Chinese companies will focus on both domestic and international markets. Releasing models in the open is a clear appeal to international markets, and it's a way to ensure that Chinese models become the global standard-setters going forward. I haven't really seen any indication of which markets companies are favoring, but there are big pushes in places like Africa right now. So, the global market is seen as important at the very least.

In terms of focus, it seems like it's going to be on practical applications and the integration of AI tools into existing products, as we see Alibaba doing with Qwen. There does seem to be a steady push on dev tools; Kimi 2.5 is a good example of that. IQuest coder is another recent innovative model that's been trained on projects' git histories to understand how code evolves. Strong coding models are a really important application for LLMs, and I think there's going to be a big push in that area in the coming years.

[–] o_d@lemmygrad.ml 3 points 1 week ago (2 children)

[...] there are big pushes in places like Africa right now.

Oh that's interesting. What areas does this push in Africa focus on?

As for dev tools, I think the jury is still out on whether AI is actually beneficial. Being able to skip the tedium of typing out some boilerplate can be nice, but the push to use these tools in the west is all-encompassing. I've seen a lot of studies suggesting these tools actually reduce productivity, all while lowering people's job satisfaction. I've also noticed a growing inability to read and reason about code among less experienced devs. This is a much more problematic bottleneck than the speed of writing code.

I'm not familiar with how these tools are used in China. I have to assume they take a much more pragmatic approach. The goal in the west is certainly to get these models to a place where a junior dev plus an AI tool can replace a senior dev, reducing labour costs. The reduced productivity is seen as a temporary cost to achieve this ultimate goal.

[–] yogthos@lemmygrad.ml 4 points 1 week ago (1 children)

Bloomberg actually had a good article on this, if you ignore the biases. Basically, DeepSeek and other Chinese companies are starting to get used by African companies to build on top of. And since the API costs are a fraction of what US companies charge, they're far more popular. On top of that, some companies in Africa are starting to run self-hosted models too.

I've been using AI coding tools for the past six months, and I can definitively tell you that they are beneficial. There's no question about that. What we see is that people are still learning to use these tools effectively, and that they're not magic. But once you spend enough time with them, they really do work well.

I've seen the studies you refer to, and the problem is that they often have low sample sizes, and that a lot of experienced devs don't really want to use these tools. So, when you have people who are already biased against this tech, they're obviously not going to be effective at using it. That doesn't mean the tech itself isn't effective, though.

Personally, I haven't seen much evidence to support the arguments that the code is harder to read and reason about either. The code LLMs produce is often a lot better than the code I've seen written by hand.

All that said, these tools absolutely cannot replace senior devs in their current state. The one thing I've consistently noticed is that you have to be very specific when instructing the model on how the problem should be solved. If you just tell it to do something in a general way, it will almost certainly produce a poor solution. However, if you tell it the steps you want, then it can follow instructions well. And developing the intuition for how to approach a particular type of problem is precisely what makes an experienced dev. A junior who doesn't have enough experience to understand what the correct solution should look like will not be able to instruct the LLM on what needs to be done.
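To make that concrete, here's a toy contrast (all the names here are hypothetical, not from any real project) between the kind of general prompt that tends to fail and the stepwise instructions that work:

```python
# Hypothetical example: the same task phrased two ways.
# A vague prompt leaves all the design decisions to the model:
vague_prompt = "Add caching to the user service."

# A specific prompt encodes the experienced dev's intuition as steps:
specific_prompt = "\n".join([
    "Add caching to UserService.get_user:",
    "1. Use an in-memory dict keyed by user_id.",
    "2. Invalidate the entry in update_user and delete_user.",
    "3. Cap the cache at 1024 entries, evicting the oldest first.",
    "4. Do not change any public method signatures.",
])
```

The second prompt is doing the senior dev's job: it already contains the design. The model just fills in the typing.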

[–] o_d@lemmygrad.ml 1 points 6 days ago (1 children)

Thanks for sharing! It's easy to take for granted that so much information in the west is easily accessible via the internet. This not being the case in Africa is a really interesting point covered in the article. They need to do a lot of the training themselves, and this is after hiring labour to digitize the data in the first place. Having open source models is so critical to keeping costs down for this to be feasible.

I've been taking a break from the tech industry for the last half year for mental health reasons. This was right around the time when agents became available, so my experience with AI as a dev tool is mostly without them. At that time, I generally found it more effective and rewarding to spend the time coming up with a solution on my own rather than trying to find the right prompts to get the resulting code to an acceptable place.

In my experience, the raw speed of writing code was never a big cost, especially after teaching myself to touch type correctly and learning to navigate Vim/Neovim. The vast majority of my time is spent understanding the problem as deeply as possible and coming up with a solution that takes the context into account and can be easily maintained, enhanced, and scaled: the RFC/TDD phase, then testing and iterating.

I don't think this is unique to me. I've heard and read much the same from other senior devs. Would you agree with this? If so, how specifically has AI been beneficial in your experience?

Also, I just want to add that I agree a junior with an LLM cannot take the place of a senior at this time. But I do think that's the ultimate goal of the tech oligarchs. I have no idea if this is truly achievable with LLMs, since there are still a lot of problems to solve, especially around iterating on and enhancing existing products. Progress is being made, however, so it could be just a matter of time.

[–] yogthos@lemmygrad.ml 4 points 6 days ago (1 children)

Coding with agents is a completely different experience. You can think of it as a pair programming session. Modern agents are quite good at understanding the project, finding things in it, and making changes reliably, and I find this saves an incredible amount of time. Tasks that used to take hours take minutes now. The key difference is that the agent actually sees your specific project, and you can basically get it to do TDD, where you set up tests it has to pass so it can't cheat.
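The TDD loop I mean looks roughly like this (a minimal sketch; `slugify` is just a hypothetical stand-in for whatever the agent is asked to implement):

```python
# Sketch of agent-driven TDD: the human writes the tests up front,
# then the agent iterates on the implementation until they pass.

def slugify(title: str) -> str:
    # Stand-in for the agent-produced code; the human never writes this part.
    return "-".join(title.lower().split())

# Human-authored tests the agent can't cheat past:
assert slugify("Hello World") == "hello-world"
assert slugify("  Multiple   Spaces ") == "multiple-spaces"
```

Because the tests are fixed by the human, the agent has to converge on behavior you specified rather than whatever it finds statistically likely.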

The huge time saving comes from finding the spots that need to change, which is traditionally a pain in a large codebase, and figuring out stuff like API calls, library usage, etc. It's a lot better at tracking variables across the code too. A lot of the time it finds and cleans up things I would've missed. You used to have to look this stuff up constantly, and now the agent just does it for you. It's particularly great for working with things like web services, where it can figure out API calls from the code. Previously, I'd have to set up and run a bunch of services, try calling the APIs, and inspect the payloads. Now the agent can figure all this out by itself just by doing code analysis.

Being able to get something working quickly also means your iteration cycle is a lot faster. You can try different things, and just throw them out if they don't work. Something that might've taken a week of time to try out before can be done in a few hours now. I find this directly helps with understanding the problem too, because as you try different things you learn the problem space and get a better idea of what the right solution is.

It also lets you work with languages you don't have a lot of experience with. I'm currently working on a React/JS project, and I hadn't touched either before. I'm able to use my knowledge of how to structure apps and solve problems, and then have the agent deal with the implementation details and all the quirks of JS syntax. I simply would not have been able to take on a project of this scope before.

I think the goal of replacing seniors might be real, but it's clearly not achievable. LLMs don't have understanding in the human sense; they don't have an internal model of the world like we do to base decisions on. These are just statistical inference engines. Ultimately, the human has to understand the problem being solved and the correct way to solve it. The agent can help you find the solution, but you have to be able to evaluate it.

The more realistic scenario going forward is that it's the juniors who end up being replaced, with companies keeping a handful of senior devs wrangling the agents that handle the implementation details. That's obviously going to backfire in the long run, though, as it will create a gap in the talent pool.

I imagine that eventually people will figure out that these tools aren't magic, and how to effectively integrate them into development workflows. Any new tech like this ends up being disruptive in the short term, and the hype results in stupid decisions being made. Over time the hype dies down, and people figure out how to use new technology in a sane way.

[–] o_d@lemmygrad.ml 2 points 5 days ago (1 children)

Thanks for taking the time to share your perspective! I'm actually curious about working with agents myself now.

[–] yogthos@lemmygrad.ml 2 points 5 days ago (1 children)

No problem, and it's definitely worth trying out. I can recommend Crush, which is a pretty decent tool supporting a bunch of LLM providers as well as local models. I'd start with DeepSeek, because their API costs are really cheap and it's fairly competent at a lot of tasks.
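If you want to poke at the API directly before wiring up a tool, DeepSeek exposes an OpenAI-compatible chat endpoint. The endpoint URL and model id below are assumptions based on DeepSeek's public docs, so verify them before relying on this sketch:

```python
import json

# Minimal request payload for DeepSeek's OpenAI-compatible chat API.
# The model id "deepseek-chat" and the endpoint are assumptions to verify.
request = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "Explain what this regex matches: ^a+b?$"},
    ],
}
body = json.dumps(request)
# POST `body` to https://api.deepseek.com/chat/completions
# with an "Authorization: Bearer <your API key>" header.
```

Since the request shape is the standard OpenAI one, any OpenAI-compatible client or agent tool should be able to point at it by swapping the base URL.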

[–] Comprehensive49@lemmygrad.ml 1 points 13 hours ago (1 children)

Why do you prefer Crush over OpenCode, Claude Code, or Cursor CLI? I'm exploring this space myself as well.

[–] yogthos@lemmygrad.ml 1 points 11 hours ago

I find they're all fairly similar in practice. What matters most is the model capability. The workflow is really similar across different tools from what I've seen.

[–] Comprehensive49@lemmygrad.ml 4 points 1 week ago* (last edited 1 week ago) (1 children)

AI for coding is very useful, assuming you're already a senior dev. It allows one senior dev to replace four or five junior devs, ultimately annihilating the market for junior devs. Unlike junior devs, senior devs already know how the code should work, so verifying the AI's output isn't an issue. Instead of senior devs having to "waste time and productivity" training junior devs, they can spin up four or five AI agents and supervise them, commanding them to write the code and then reviewing it at the end.

Now the question arises: what will happen when these senior devs retire? There is no answer to that. The idea is that by then, all dev work will have been replaced by AI.

[–] o_d@lemmygrad.ml 3 points 1 week ago (1 children)

This logic doesn't really track in my experience. Hiring junior devs was never about writing production code. It's an investment with the understanding that a junior will develop into an intermediate and senior in the future. It generally takes more time to mentor a junior and review their code than to simply code it myself. In addition, doing it myself will certainly result in higher quality, scalable, maintainable code.

This is the contradiction. Many seniors are seeing their productivity decline, because being forced to use AI is like mentoring one, two, three, four, etc. juniors at once. The AI doesn't improve based on your individual mentorship, making this a different type of "investment", if you can even call it that. This isn't about productivity or investing in labour. It's about creating a system by which the value of your labour power can be reduced.

[–] Comprehensive49@lemmygrad.ml 2 points 14 hours ago (1 children)

My point is that companies hire junior devs in order to train them into senior devs and build up in-house programming ability. Now they can just have their senior devs spin up one more AI agent, so they have much less incentive to ever hire juniors.

The newer AI coding tools (Cursor, Claude Code, the new new new OpenAI Codex) are much more powerful than even just a year ago, and legitimately feel like force multipliers for senior devs. ( https://www.youtube.com/watch?v=Z9UxjmNF7b0 ) Unfortunately, junior devs are too junior to know how to use them to their full extent, and will soon have no job opportunities in which to learn how to become senior devs.

The programming job market reflects this fact, and has crashed pretty hard for juniors.
