It renders fine, it's just a pain to read due to the wide aspect ratio. Either it's too small, you have to scroll horizontally for each line, or you have to flip your phone. None of it is optimal.

Fur real? I think you're purrbably right.

This was my biggest takeaway here. Wtf?! "I personally set the price and thought we would make some money"?! Either he is trying to sound cool by being casual or he is a fucking idiot. Or probably both.

[-] pufferfischerpulver@feddit.org 4 points 2 weeks ago

How many fingers? 🖐️

[-] pufferfischerpulver@feddit.org 5 points 2 weeks ago

Fucking RealPlayer 😩

[-] pufferfischerpulver@feddit.org 12 points 2 weeks ago

Wtf Rome, such tyrants. Never heard of the 2nd amendment or what?!

[-] pufferfischerpulver@feddit.org 15 points 2 weeks ago

TBF iTunes is a terrible player, but it made shitloads of money, so I guess they achieved what they set out to do.
And I would argue iTunes is the reason newer media player versions are shit, since of course MS saw there was money to be made and tried to do the same.

[-] pufferfischerpulver@feddit.org 8 points 2 weeks ago

What a bullshit argument. One of the arguments for self-driving cars is precisely that they don't do the same things humans do. And why should they? It's ludicrous for a company to train them on "social norms" rather than the actual laws of the road, at least when it comes to black-and-white issues like the one described in the article.

[-] pufferfischerpulver@feddit.org 12 points 2 weeks ago

It's like my brain is not dissimilar to the toddler I have at home. If I don't take charge, come up with activities, give it a routine, feed it properly, and make sure it is hydrated and well rested, it turns into a bored, bossy, moody asshole. Of course they both, toddler and brain, still do that sometimes, but if I do all the things above I'm usually prepared. Sometimes they are also super good at doing all of those things by themselves.

In some ways, while extremely exhausting, having a toddler is actually great for my brain because otherwise I would not eat well, make plans, follow routines, go out, or have fun. My brain is still unhappy of course because it wants to do nothing and eat "chips with sugar" (to quote this toddler being I have at home).

[-] pufferfischerpulver@feddit.org 1 points 3 weeks ago

I don't get the game tbh. At first it was cozy then it turned into work. Which I know people like, some at least. But I work enough in the day to not want to work when I play.

[-] pufferfischerpulver@feddit.org 13 points 3 weeks ago

Interesting you focus on language. Because that's exactly what LLMs cannot understand. There's no LLM that actually has a concept of the meaning of words. Here's an excellent essay illustrating my point.

The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.

The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but "cannot in principle lead to the learning of meaning" (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
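To make that concrete, here's a minimal sketch (my own illustration, not from the essay) of what "statistical learning to predict words" means at its simplest: a toy bigram model that only ever sees which tokens follow which other tokens, and never anything the tokens refer to.

```python
# Toy illustration (not from the essay): a "language model" reduced to its bare
# statistical core. It learns next-word patterns from a stream of symbols and has
# no access to anything those symbols refer to -- the symbol grounding problem.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word -- pure pattern matching."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("the"))  # e.g. 'cat' -- picked by frequency, not by meaning
print(predict_next("sat"))  # 'on' -- the model never knows what sitting *is*
```

Real LLMs are vastly more sophisticated than this, but the essay's point is that they are still doing the same kind of thing: finding predictive structure in symbol streams, with no external referents attached.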

