AtmosphericRiversCuomo

joined 1 year ago
[–] AtmosphericRiversCuomo@hexbear.net 1 points 5 days ago (1 children)

People don't like shows like that because of the class composition of the characters. They like shit that makes them laugh.

Arrested Development clowned on out of touch rich people, but that's secondary to it just being funny as hell.

MITM at best made people nose laugh. It had some great performances, but they were shoved into a bland package.

[–] AtmosphericRiversCuomo@hexbear.net 6 points 1 week ago (6 children)

The first one wasn't even good. What's next, a reboot of The Goldbergs?

They're not paying for 20 votes, they're "paying" 20 people to work for them.

"But these models can't even think!!", I reassure myself as my job that never actually required much thought to begin with is automated away by some shitty AI agent deemed "good enough" by management.

This was an okay interview honestly. Felt like socdem slop. She drops "inshallah" unironically at one point, idk if she's one of us or speaks Arabic or what.

This guy made his bag early on and is now advocating against the prevailing AI attitudes coming from freaks like Musk and Thiel. Dude's a pro-AI lib, but this interview had some good insights, and the interviewer asked good questions about fascism, environmental impacts, etc.

One insight was that everyone in Silicon Valley is going all in on trying to build an AI God, rather than focusing on how the tech we already have could improve people's lives in areas like medicine. Nothing new to socialists, but it was refreshing to hear, and I hope it hits for some people.

[–] AtmosphericRiversCuomo@hexbear.net 9 points 1 month ago (1 children)

Please do it in the form of a trilogy of shitty novels.

[–] AtmosphericRiversCuomo@hexbear.net 30 points 1 month ago (1 children)

If you can still connect the dots enough to admit it's from covid, then you're doing better than 99% of us.

Sigh. This is exactly why the Federation banned "little guy" posting in 2157.

[–] AtmosphericRiversCuomo@hexbear.net 18 points 1 month ago (1 children)

“Not only does the enemy make you ignorant…he makes you want to love ignorance and hate knowledge.”

[–] AtmosphericRiversCuomo@hexbear.net 15 points 1 month ago* (last edited 1 month ago)

This, but for Unitree robotics.

You're right of course, but you're wasting your breath here because people have a hard time separating prediction from praise. They don't want to win, only whine.

It's what I use on my boomers.

 

"Girl, you better hold on tight to your man, because I'm known for separating families, okkaaaaaayyy?"

 

We know China is notoriously hard to immigrate to, but what about other options? Mongolia? Singapore? Laos? Vietnam?

Basically looking to discuss countries that are easier to get into, but might still benefit from being in the rising east rather than the collapsing west.

 


 

It feels like the US isn’t releasing what it has. I don’t think they’re behind, maybe just holding back?

i-cant

 

Test-time training (TTT) significantly enhances language models' abstract reasoning, improving accuracy up to 6x on the Abstraction and Reasoning Corpus (ARC). Key factors for successful TTT include initial fine-tuning, auxiliary tasks, and per-instance training. Applying TTT to an 8B-parameter model boosts accuracy to 53% on ARC's public validation set, nearly 25% better than previous public neural approaches. Ensembling TTT with recent program generation methods achieves 61.9% accuracy, matching average human scores. This suggests that, in addition to explicit symbolic search, test-time training on few-shot examples significantly improves abstract reasoning in neural language models.
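The core idea of per-instance TTT can be sketched in a few lines: before predicting on a task, take gradient steps on that task's own few-shot demonstration pairs. This toy uses a 1-D linear model in place of an LLM; all names and numbers here are illustrative, not from the paper.

```python
# Minimal sketch of test-time training (TTT): fine-tune a fresh per-instance
# copy of the model on the task's few-shot demos, then predict.
# A toy 1-D linear model y = w*x stands in for the language model.

def ttt_predict(demos, query, lr=0.1, steps=100):
    """demos: list of (x, y) pairs for this one task instance."""
    w = 0.0  # per-instance parameter copy (real TTT clones base weights)
    for _ in range(steps):
        for x, y in demos:  # SGD on the instance's own demonstrations
            grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w * query

# The demos implicitly define the rule "multiply by 3"
demos = [(1, 3), (2, 6), (4, 12)]
print(ttt_predict(demos, 5))  # converges to ~15.0
```

The point mirrored from the paper: the parameters adapted for one instance are thrown away before the next, so each test task gets its own briefly specialized model.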

 

Unlike traditional language models that only learn from textual data, ESM3 learns from discrete tokens representing the sequence, three-dimensional structure, and biological function of proteins. The model views proteins as existing in an organized space where each protein is adjacent to every other protein that differs by a single mutation event.

They used it to "evolve" a novel protein that acts similarly to others found in nature, while being structurally unique.
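The "adjacent by a single mutation event" framing is just a Hamming-distance-1 neighborhood over sequences. A toy illustration (not ESM3 code; the helper names are mine):

```python
# Toy model of the mutation neighborhood described above: two protein
# sequences are "adjacent" if they differ by exactly one substitution.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def is_adjacent(a, b):
    """True if equal-length sequences differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def neighbors(seq):
    """Every sequence reachable by a single substitution."""
    return [seq[:i] + aa + seq[i + 1:]
            for i in range(len(seq))
            for aa in AMINO_ACIDS
            if aa != seq[i]]

print(is_adjacent("MKV", "MRV"))  # True: differs only at position 1
print(len(neighbors("MKV")))      # 3 positions x 19 alternatives = 57
```

Even a 3-residue toy has 57 neighbors; real proteins have hundreds of positions, which is why navigating that space takes a learned model rather than enumeration.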

 

They fine-tuned a Llama 13B LLM with military specific data, and claim it works as well as GPT-4 for those tasks.

Not sure why they wouldn't use a more capable model like the 405B though.

Something about this smells to me. Maybe a way to stimulate defense spending around AI?

 

...versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks

This method could be faster and less expensive than traditional techniques because it requires far less task-specific data. In addition, it outperformed training from scratch by more than 20 percent in simulation and real-world experiments.

Paper: https://arxiv.org/pdf/2409.20537
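The general pattern for pooling heterogeneous robot data looks roughly like this: a small per-source encoder maps each robot's differently-sized observations into one shared embedding space, and a single shared head is trained on everything. This is a hedged, dependency-free sketch of that pattern, not the paper's actual architecture; all names and dimensions are made up.

```python
# Sketch: per-source encoders -> shared embedding -> one shared policy head.
import random

random.seed(0)

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained layer."""
    return [[random.uniform(-0.1, 0.1) for _ in range(in_dim)]
            for _ in range(out_dim)]

def apply(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

EMBED = 4
encoders = {                     # one encoder per robot / data source,
    "arm_7dof": linear(7, EMBED),     # each with its own input size
    "gripper_3dof": linear(3, EMBED),
}
shared_head = linear(EMBED, 2)   # one head trained across all sources

def policy(source, obs):
    return apply(shared_head, apply(encoders[source], obs))

a = policy("arm_7dof", [0.1] * 7)     # 7-D observation
b = policy("gripper_3dof", [0.5] * 3) # 3-D observation
print(len(a), len(b))  # both map to the same 2-D action space
```

The shared trunk is what lets data from one robot improve the policy for another, which is the claimed source of the gain over training from scratch.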

 

With the stated goal of "liberating people from repetitive labor and high-risk industries, and improving productivity levels and work efficiency"

Hopefully they can pull it off cheaply while Tesla's Optimus remains vaporware (or whatever the real world equivalent of vaporware is).
