Technology

A tech news sub for communists

founded 3 years ago

This paper is honestly one of the most creative takes on LLM reasoning I’ve seen in a while. The team at ByteDance basically argues that we should view Long Chain-of-Thought as a macromolecular structure with internal forces that hold the logic together. They found that when we try to teach a model to reason by simply distilling keywords from a teacher, it fails because it’s like trying to build a protein by looking at a photo of it rather than understanding the atomic bonds.

Their Molecular Structure of Thought hypothesis breaks reasoning down into three bond types that behave similarly to their chemical counterparts. Deep reasoning acts like covalent bonds, forming the rigid primary backbone where each logical step must strictly justify the next. Self-reflection functions like hydrogen bonds, creating folding patterns where the model looks back as far as 100 steps to audit an earlier premise, which keeps it from hallucinating. Finally, self-exploration acts like van der Waals forces: low-commitment bridges that let the model probe different ideas without getting locked into a rigid path too early.
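To make the taxonomy concrete, here's a toy sketch of tagging reasoning steps by bond type. The keyword lists are ones I made up for illustration; the paper's actual classification is surely more involved than cue-word matching:

```python
# Toy bond taxonomy: cue phrases are invented for illustration,
# not taken from the paper.
BOND_CUES = {
    "covalent":      ["therefore", "thus", "so we have"],   # deep reasoning
    "hydrogen":      ["wait", "let me check", "earlier i"], # self-reflection
    "van_der_waals": ["alternatively", "what if", "maybe"], # self-exploration
}

def classify_step(step: str) -> str:
    """Label a reasoning step with the bond type its cue words suggest."""
    lowered = step.lower()
    for bond, cues in BOND_CUES.items():
        if any(cue in lowered for cue in cues):
            return bond
    return "unbonded"
```

The point of the analogy is that a healthy trace mixes all three: too much "covalent" and the model never second-guesses itself, too much "van der Waals" and it wanders forever.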

They found that most synthetic reasoning data is actually trash because it lacks this bond distribution. They showed that models don't actually learn the keywords themselves, but the characteristic reasoning behaviors those keywords represent. In one experiment, they replaced keywords like "wait" with arbitrary synonyms or removed them entirely, and the models still learned the reasoning structure just fine. It turns out that building these stable thought molecules is what creates the basis for Long CoT, as opposed to just mimicking a specific vibe or prompt format.
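The ablation itself is simple to picture. Here's a minimal sketch of what stripping or swapping the cue words in a trace could look like; the keyword list and the nonsense replacement tokens are my guesses, not the paper's actual setup:

```python
import random
import re

# Reflection cue words to ablate (a guessed list, not the paper's).
REFLECTION_KEYWORDS = ["wait", "hmm", "let me reconsider"]

def ablate_keywords(trace: str, mode: str = "remove", rng=None) -> str:
    """Strip or swap reflection cue words while leaving the logic intact."""
    rng = rng or random.Random(0)
    synonyms = ["flibber", "zorp"]  # arbitrary tokens, deliberately meaningless
    for kw in REFLECTION_KEYWORDS:
        repl = "" if mode == "remove" else rng.choice(synonyms)
        trace = re.sub(re.escape(kw), repl, trace, flags=re.IGNORECASE)
    return trace
```

If a model trained on ablated traces still reasons well, the keywords were never the signal, only the behavior around them was.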

They built MOLE-SYN to address the problem. Instead of just copying teacher outputs, it uses a distribution transfer graph to walk through four behavioral states and synthesize traces that have the correct bond profile from the start. Their approach makes reinforcement learning much more stable because the model starts with a balanced skeleton instead of a bunch of fragmented logic. The paper challenges the whole "more data is better" mindset, arguing that it's the geometry of the information flow that really matters.
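I read the "walk through four behavioral states" part as something like sampling from a small state graph. A hypothetical sketch, where the state names and transition probabilities are entirely invented (the paper's actual graph is presumably learned from real traces):

```python
import random

# Hypothetical behavioral-state graph; names and weights are invented,
# not MOLE-SYN's actual distribution transfer graph.
TRANSITIONS = {
    "reason":   {"reason": 0.5, "reflect": 0.2, "explore": 0.2, "conclude": 0.1},
    "reflect":  {"reason": 0.6, "reflect": 0.1, "explore": 0.2, "conclude": 0.1},
    "explore":  {"reason": 0.5, "reflect": 0.2, "explore": 0.2, "conclude": 0.1},
    "conclude": {},  # absorbing state: the trace ends here
}

def sample_walk(rng=None, start="reason", max_steps=50):
    """Walk the state graph until hitting the absorbing 'conclude' state."""
    rng = rng or random.Random(0)
    state, walk = start, [start]
    while TRANSITIONS[state] and len(walk) < max_steps:
        states, probs = zip(*TRANSITIONS[state].items())
        state = rng.choices(states, weights=probs)[0]
        walk.append(state)
    return walk
```

Each sampled state would then condition what kind of reasoning text gets generated at that step, which is how the synthetic trace ends up with a balanced bond profile rather than one long undifferentiated chain.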

A blog post I found in response to Cory Doctorow taking a pro-LLM stance in a recent post of his.

I have been developing a website that links to a wide array of Marxist media in various formats. I want Marxists, new and old, to be able to go to the "Organization" section, for instance, and have instant access to books, articles, YouTube videos and so on about the topic. Now I want your input! In particular, I have three questions for you.

1. Does this already exist? First, I want to double-check that no project like this has already been made. As far as I can tell, other websites either host their own content, host a small niche selection of content, or were never finished. If you know of any, please let me know so I can direct my efforts elsewhere.

2. What do you want out of a user interface? I don't want to scare off new leftists with the dry web design Marxists are used to searching through. My solution was a UI that resembles a movie streaming service, but that risks compromising detail and functionality. Which layouts and features would best suit your needs?

3. How should I approach sourcing the content? Currently I have added the essential works we all know and am manually finding relevant articles and videos. In the future, I hope anyone will be able to contribute material. I think here in Technology you all share my love of open source. Would simply hosting the site from a GitHub repository work? Is Microsoft's GitHub a viable home for a Marxist site?

Your input is warmly welcomed.

@technology Electric Vehicle Sales Boom as Ethiopia Bans Fossil-Fuel Car Imports

Chinese EVs are pulling Africans out of poverty

https://archive.is/qsHIw

Palliser Capital recently sent a letter to Toto, the $7 billion Japanese toilet maker. They called the company "the most undervalued and overlooked AI memory beneficiary." That might seem strange at first, but the connection is in materials science.

Toto is famous for its bidet toilets, but its deep expertise is in advanced ceramics. According to the FT, Toto's chuck technology uses ceramics engineered to remain perfectly stable at extremely low temperatures. This turns out to be really handy for holding silicon wafers firmly in place during cryogenic etching, which is becoming more important as memory chips get more layered and complex. Palliser believes Toto has about a five-year lead in this specific technology and should expand this side of its business.

Toto's advanced ceramics division already contributes 40% of the company's operating profit, despite making up less than 10% of its revenue.

Toto isn't even the most extreme example. Another company, Ajinomoto, best known for MSG, leveraged decades of amino acid research to develop an insulating film, called Ajinomoto Build-up Film (ABF), that is used in virtually every high-end GPU. They hold an estimated 95% global monopoly on this material. During the 2021 chip shortage, a major bottleneck was the supply of Ajinomoto's film.

It turns out that Japanese companies hold a majority global share in at least 14 critical semiconductor materials, showing how industrial processes are deeply connected. The sintering technique used to create a non-porous ceramic toilet is the same one used to create a contamination-free wafer chuck. The most foundational layer of computing hardware relies on companies whose public identity is built on consumer goods like toilets, food seasoning, and window glass. It's a good reminder that physical material science underpins digital advancement.

https://archive.ph/LhvNo
