[-] nulldev@lemmy.vepta.org 3 points 11 months ago

Ah shit, I thought it had reverb but it doesn't seem to :(, my bad.

[-] nulldev@lemmy.vepta.org 5 points 11 months ago

Get any equalizer app (e.g. Poweramp Equalizer).

[-] nulldev@lemmy.vepta.org 2 points 11 months ago

What's wrong with that though? BTC handles forks just fine. Eventually one fork will win out and life will continue on as usual.

The bigger issue this paper presents is that miners become incentivized to mine empty blocks. But can't you just enforce a minimum transaction count on blocks?
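To sketch what that rule might look like (the `MIN_TXS` threshold and the `validate_block` function are made up for illustration; this is not how Bitcoin Core implements validation, and real blocks always contain at least the coinbase transaction):

```python
# Hypothetical consensus rule rejecting near-empty blocks.
MIN_TXS = 10  # made-up threshold, not part of any real protocol

def validate_block(transactions: list[str]) -> bool:
    """Reject blocks that carry fewer than MIN_TXS transactions
    beyond the mandatory coinbase transaction."""
    if len(transactions) < 1:
        return False  # every block needs a coinbase
    return len(transactions) - 1 >= MIN_TXS  # exclude the coinbase itself

# A coinbase-only ("empty") block fails the rule:
print(validate_block(["coinbase"]))  # False
print(validate_block(["coinbase"] + [f"tx{i}" for i in range(10)]))  # True
```

Of course, miners could still stuff blocks with their own zero-fee transactions to satisfy the count, so a rule like this raises the cost of empty mining rather than eliminating it.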

[-] nulldev@lemmy.vepta.org 8 points 11 months ago

BTW that still uses Google's proprietary gesture typing library internally: https://github.com/wordmage/openboard/commit/46fdf2b550035ca69299ce312fa158e7ade36967

There's still no good FOSS alternative to Google's library though so it is what it is.

[-] nulldev@lemmy.vepta.org 6 points 11 months ago* (last edited 11 months ago)

It's still bad compared to modern lossless algorithms. PNG is very old, and even though PNG encoders have evolved, the format itself is still fundamentally a decade behind modern lossless compression algorithms.

For example: JPEG XL in lossless mode compresses at least 30% better than PNG.

Also, PNG is not actually lossless in many cases. PNG only supports RGB colorspaces. If you try to store anything that's not in an RGB colorspace (e.g. a frame of a video, which is usually in a YCbCr colorspace) in a PNG, you will lose some color information, because the colorspace conversion is not lossless. Example of someone running into this issue: https://stackoverflow.com/q/35399677

JPEG XL supports non-RGB colorspaces so you don't have this problem.
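To make the lossiness concrete, here's a toy sketch using the standard BT.601 conversion formulas (the specific pixel value is just an example, and this is not any particular codec's exact math): an 8-bit YCbCr value does not necessarily survive a round trip through RGB, because intermediate results get rounded and out-of-gamut values get clipped.

```python
# Toy illustration: converting an 8-bit YCbCr pixel to RGB and back
# does not always return the original values. Rounding loses precision,
# and YCbCr values outside the RGB gamut get clipped.

def ycbcr_to_rgb(y, cb, cr):
    # BT.601 full-range conversion, clamped and rounded to 8-bit RGB
    clamp = lambda v: max(0, min(255, round(v)))
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return clamp(r), clamp(g), clamp(b)

def rgb_to_ycbcr(r, g, b):
    clamp = lambda v: max(0, min(255, round(v)))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return clamp(y), clamp(cb), clamp(cr)

pixel = (255, 0, 255)  # a YCbCr value with no exact RGB equivalent
roundtrip = rgb_to_ycbcr(*ycbcr_to_rgb(*pixel))
print(pixel, roundtrip)  # the tuples differ: color information was lost
```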

[-] nulldev@lemmy.vepta.org 20 points 11 months ago

JPEG XL came after WebP. It's more of a successor and less of a competitor.

That said, in the world of standards, a successor is still a competitor.

[-] nulldev@lemmy.vepta.org 13 points 11 months ago

No, you want it for scrolling. Scrolling feels much more responsive at 120Hz. It does drain battery more but not by enough to be a deal breaker for most people.

It's useless for videos, as most videos are 60 fps or lower anyway.

[-] nulldev@lemmy.vepta.org 5 points 1 year ago* (last edited 1 year ago)

The issue here is that you are describing the goal of LLMs, not how they actually work. The goal of an LLM is to pick the next most likely token. However, it cannot achieve this through rudimentary statistics alone, because the model simply does not have enough parameters to memorize which token is most likely to come next in every case. So yes, the model "builds up statistics of which tokens it sees in which contexts", but it does so by building its own internal data structures and organization systems, which are complete black boxes.

Also, going "one token at a time" is only a "limitation" because LLMs are not accurate enough. If LLMs were more accurate, then generating "one token at a time" would not be an issue because the LLM would never need to backtrack.

And this limitation only exists because there isn't much research into LLMs backtracking yet! For example, you could give LLMs a "backspace" token: https://news.ycombinator.com/item?id=36425375
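As a rough illustration of the idea (the `BACKSPACE` token, the decode loop, and the scripted "model" below are all made up for this sketch; a real implementation would work at the level of logits and sampling):

```python
# Toy decoding loop with a "backspace" token: the vocabulary includes a
# special token that deletes the previously emitted token instead of
# appending a new one, letting the model undo a mistake mid-generation.

BACKSPACE = "<bksp>"

def decode(fake_model, prompt, max_steps=20):
    tokens = list(prompt)
    for _ in range(max_steps):
        nxt = fake_model(tokens)
        if nxt is None:        # end-of-sequence
            break
        if nxt == BACKSPACE:   # model corrects itself
            if tokens:
                tokens.pop()
        else:
            tokens.append(nxt)
    return tokens

# A scripted "model" that emits a wrong token, backspaces, then fixes it:
script = iter(["2", "+", "2", "=", "5", BACKSPACE, "4", None])
print(decode(lambda toks: next(script), []))  # ['2', '+', '2', '=', '4']
```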

> Have you tried that when it’s correct too? And in that case you mention it has a clean break and then start anew with token generation, allowing it to go a different path. You can see it more clearly experimenting with local LLM’s that have fewer layers to maintain the illusion.

If it's correct, then it gives a variety of responses. The space token effectively just makes it reflect on the conversation.

> We’re trying to make a flying machine by improving pogo sticks. No matter how well you design the pogo stick and the spring, it will not be a flying machine.

To be clear, I do not believe LLMs are the future. But I do believe that they show us that AI research is on the right track.

Building a pogo stick is essential to building a flying machine. By building a pogo stick, you learn so much about physics. Over time, you replace the spring with some gunpowder to get a mortar. You shape the gunpowder into a tube to get a model rocket and discover the pendulum rocket fallacy. And finally, instead of gunpowder, you use liquid fuel and you get a rocket that can go into space.

[-] nulldev@lemmy.vepta.org 5 points 1 year ago

Whoops, meant to say: "In many cases, they can accurately (critique their own work)". Thanks for correcting me!

[-] nulldev@lemmy.vepta.org 4 points 1 year ago

Have you even read the article?

IMO it does not do a good job of disproving that "humans are stochastic parrots".

The example with the octopus isn't really about stochastic parrots. It's more about how LLMs are not multi-modal.

[-] nulldev@lemmy.vepta.org 5 points 1 year ago

> it just predicts the next word out of likely candidates based on the previous words

An entity that can consistently predict the next word of any conversation, book, or news article with extremely high accuracy is quite literally a god, because it can effectively predict the future. So it is not surprising to me that GPT's performance is not consistent.

> It won't even know it's written itself into a corner

In many cases it does. For example, if GPT gives you a wrong answer, you can often just send an empty message (single space) and GPT will say something like: "Looks like my previous answer was incorrect, let me try again: blah blah blah".

> And until we get a new approach to LLM's, we can only improve it by adding more training data and more layers allowing it to pick out more subtle patterns in larger amounts of data.

This says nothing. You are effectively saying: "Until we can find a new approach, we can only expand on the existing approach" which is obvious.

But new approaches come along all the time! Tokenization keeps advancing, and every week there's a new paper with a new model architecture. We are not stuck in some sort of hole.

[-] nulldev@lemmy.vepta.org 8 points 1 year ago

> LLMs can't critique their own work

In many cases they can. This is commonly used to improve their performance: https://arxiv.org/abs/2303.11366
