AtmosphericRiversCuomo

joined 1 year ago

Have a Xanax on hand if you read this.

Basically saying "don't blame us when Kessler syndrome happens because of Elon."

[–] AtmosphericRiversCuomo@hexbear.net 20 points 1 week ago (2 children)

So what I'm reading is that leftists could take and hold land in rural Canada without much provincial or federal pushback?

[–] AtmosphericRiversCuomo@hexbear.net 5 points 2 weeks ago (1 children)

Probably going to be great for shorter contexts. Longer remains to be seen.

Fomenting conflict between evangelicals and Mormons, as a bit.

[–] AtmosphericRiversCuomo@hexbear.net 7 points 2 weeks ago (4 children)

I have banned shidding. Pray I do not ban pissing.

Finally I feel seen.

Counterpoint: nah fuck that

Effective, if the goal is to demonstrate the deadend that is electoralism. Her platform is based af tho.

Didn't someone message a bunch of cth people on reddit at some point after the ban or did I make that up? It's been a long time.

 

"Girl, you better hold on tight to your man, because I'm known for separating families, okkaaaaaayyy?"

Drag queen trump: you better hold onto your man honey, because I'm known for separating families, okayyyy?

 

We know China is notoriously hard to immigrate to, but what about other options? Mongolia? Singapore? Laos? Vietnam?

Basically looking to discuss countries that are easier to get into, but might still benefit from being in the rising east rather than the collapsing west.

 


 

It feels like the US isn’t releasing what it has. I don’t think they’re behind, maybe just holding back?

i-cant

 

Test-time training (TTT) significantly enhances language models' abstract reasoning, improving accuracy by up to 6x on the Abstraction and Reasoning Corpus (ARC). Key factors for successful TTT include initial fine-tuning, auxiliary tasks, and per-instance training. Applying TTT to an 8B-parameter model boosts accuracy to 53% on ARC's public validation set, nearly 25% better than previous public neural approaches. Ensembling with recent program-generation methods achieves 61.9% accuracy, matching the average human score. This suggests that, in addition to explicit symbolic search, test-time training on few-shot examples significantly improves abstract reasoning in neural language models.
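The core idea ("per-instance training") can be sketched with a toy stand-in: for each test task, a fresh model is adapted on that task's few demonstration pairs (plus cheap augmentations) before making a prediction. A tiny linear fit here stands in for the fine-tuned 8B LM; the task data and the `y = 3x` rule are invented for illustration.

```python
# Toy sketch of per-instance test-time training (TTT).
# Assumption: a linear model y = w * x stands in for the language model;
# the few-shot "task" below is hypothetical, not from ARC.

def fit_linear(pairs, lr=0.1, steps=200):
    """Fit y = w * x to a task's few demonstration pairs via gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad
    return w

def augment(pairs):
    """Cheap augmentation: negating both input and output preserves
    the underlying mapping y = w * x, doubling the training data."""
    return pairs + [(-x, -y) for x, y in pairs]

def solve_task(demo_pairs, test_input):
    # Per-instance training: adapt a fresh model on THIS task's
    # (augmented) demonstrations alone, then query it once.
    w = fit_linear(augment(demo_pairs))
    return w * test_input

# Hypothetical few-shot task: the hidden rule is y = 3x.
demos = [(1.0, 3.0), (2.0, 6.0)]
print(round(solve_task(demos, 4.0), 2))  # ≈ 12.0
```

The real method fine-tunes the LM's weights per task with LoRA-style updates and grid augmentations, but the loop is the same shape: adapt on the demonstrations, predict, discard.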

 

Unlike traditional language models that only learn from textual data, ESM3 learns from discrete tokens representing the sequence, three-dimensional structure, and biological function of proteins. The model views proteins as existing in an organized space where each protein is adjacent to every other protein that differs by a single mutation event.

They used it to "evolve" a novel protein that acts similarly to others found in nature, while being structurally unique.
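The "adjacent by one mutation" view is just a Hamming-distance-1 neighborhood over sequences. A trivial sketch (sequences invented; real ESM3 works over amino-acid, structure, and function tokens, not raw strings like these):

```python
# Toy illustration of mutation adjacency: two equal-length sequences
# are neighbors if they differ by exactly one substitution.
# The sequences below are made up for the example.

def is_single_mutation(a: str, b: str) -> bool:
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

print(is_single_mutation("MKTA", "MKSA"))  # True  (T -> S at position 3)
print(is_single_mutation("MKTA", "MKSS"))  # False (two substitutions)
```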

 

They fine-tuned a Llama 13B LLM with military-specific data, and claim it works as well as GPT-4 for those tasks.

Not sure why they wouldn't use a more capable model like 405B though.

Something about this smells to me. Maybe a way to stimulate defense spending around AI?

 

...versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks

This method could be faster and less expensive than traditional techniques because it requires far less task-specific data. In addition, it outperformed training from scratch by more than 20 percent in simulation and real-world experiments.

Paper: https://arxiv.org/pdf/2409.20537

 

With the stated goal of "liberating people from repetitive labor and high-risk industries, and improving productivity levels and work efficiency"

Hopefully they can pull it off cheaply while Tesla's Optimus remains vaporware (or whatever the real world equivalent of vaporware is).
