[-] ZickZack@kbin.social 27 points 1 year ago

That's not what lossless data compression schemes do:
In lossless compression, the general idea is to create a codebook of commonly occurring patterns and use those as shorthand.
For example, one of the simplest and now ancient algorithms, LZW, does the following:

  • Initialize the dictionary to contain all strings of length one.
  • Find the longest string W in the dictionary that matches the current input.
  • Emit the dictionary index for W to output and remove W from the input.
  • Add W followed by the next symbol in the input to the dictionary.
  • Repeat from the matching step.

Basically, instead of writing out long sequences again, it just writes down the index into an existing dictionary of already-seen sequences.
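To make those steps concrete, here's a minimal LZW encoder sketch in Python (illustrative only; a real implementation would also bound the dictionary size and pack the indices into bits):

```python
def lzw_encode(data: str) -> list[int]:
    # Step 1: initialize the dictionary with all strings of length one.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w = ""
    output = []
    for c in data:
        # Step 2: grow W while W + c is still a known pattern.
        if w + c in dictionary:
            w += c
        else:
            # Step 3: emit the index for W ...
            output.append(dictionary[w])
            # Step 4: ... and add W + next symbol as a new dictionary entry.
            dictionary[w + c] = next_code
            next_code += 1
            w = c
    if w:
        output.append(dictionary[w])
    return output

print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))
```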

However, once this is done, you still need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
It turns out that we can do this optimally: using an algorithm called arithmetic coding, we can match the length of a bitstring to the amount of information it contains.
"Information" here means the statistical concept of information, which depends inversely on how likely a certain character is to be observed.
Logically this makes sense:
Let's say you have a system that measures earthquakes. As one would expect, most of the time, let's say 99% of the time, you will see "no earthquake", while in 1% of the cases you will observe "earthquake".
Since "no earthquake" is a lot more common, the information gain is relatively small (if I told you "the system said no earthquake", you could have guessed that with 99% confidence: not very surprising).
However, if I tell you "there is an earthquake", this is much more surprising and therefore carries more information.

From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the length of every character to its information content. Arithmetic coding now gives us a general way of doing this.
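To put numbers on the earthquake example (a quick sanity check, not from the original post): the information content of an event with probability p is −log2(p) bits.

```python
import math

# Information content in bits: I(p) = -log2(p).
for event, p in [("no earthquake", 0.99), ("earthquake", 0.01)]:
    print(f"{event}: {-math.log2(p):.3f} bits")

# no earthquake: 0.014 bits  (barely surprising, barely any information)
# earthquake:    6.644 bits  (rare, therefore informative)
```

An ideal arithmetic coder would spend roughly those bit counts per symbol, so the whole stream averages about 0.08 bits per reading instead of 1.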

However, we can do even better:
Instead of just considering individual characters, we can also add in character pairs!
Of course, it doesn't make sense to add in every possible character pair, but for some of them it makes a ton of sense:
For example, if we want to compress English text, we could give a separate codebook entry to the entire sequence "the" and save a ton of bits!
To do this for pairs of characters in the English alphabet, we have to consider 26*26=676 combinations.
We can still do that: just scan the text once for each of those ~680 combinations.
With 3-character combinations it becomes a lot harder: 26*26*26=17576 combinations.
But with 4 characters it's practically impossible: you already have almost half a million combinations!
In reality, this is even worse, since you have way more than 26 characters: you have things like ",", ".", "?", "!" and your codebook IDs, which blow up the size even more!
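For small n you can sidestep the repeated scans by counting every n-gram in a single pass (a sketch; the table of counts is what still explodes, growing as alphabet_size**n):

```python
from collections import Counter

def ngram_counts(text: str, n: int) -> Counter:
    # One pass over the text; the counter can still hold up to
    # alphabet_size**n distinct keys, which is the blow-up described above.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

text = "the theory of the thing"
print(ngram_counts(text, 2).most_common(3))  # e.g. [('th', 4), ('he', 3), (' t', 3)]
```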

So, how are we supposed to figure out which character pairs to combine and how many bits we should give them?
We can try to predict it!
This technique, called PPM (prediction by partial matching), is already very old (~1980s) but is still used in many compression algorithms.
The important trick is that with deep learning we can now train even more efficient estimators, without losing the lossless property:
Remember, we only predict what things we want to combine, and how many bits we want to assign to them!
The worst-case scenario is that your compression gets worse because the model predicts nonsensical character combinations to store, but that never changes the actual information you store, only how close you get to the optimal compression.
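A toy illustration of the principle (an order-1 counting model stands in here for PPM or a neural predictor; it only computes the ideal code lengths an arithmetic coder would spend, and since the decoder updates the same model in lockstep, nothing is ever lost):

```python
import math
from collections import defaultdict, Counter

class Order1Model:
    """Predicts the next character from the previous one by counting."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def prob(self, context: str, char: str) -> float:
        seen = self.counts[context]
        # Laplace smoothing over a 256-symbol alphabet keeps every
        # character possible (p > 0), which a lossless coder needs.
        return (seen[char] + 1) / (sum(seen.values()) + 256)

    def update(self, context: str, char: str) -> None:
        self.counts[context][char] += 1

model = Order1Model()
text = "abababababababab"
total_bits = 0.0
prev = ""
for ch in text:
    p = model.prob(prev, ch)
    total_bits += -math.log2(p)  # bits an ideal arithmetic coder would spend
    model.update(prev, ch)       # the decoder makes the identical update
    prev = ch
print(f"{total_bits:.1f} bits for {len(text)} characters")
```

The better the predictor, the fewer bits; a bad predictor only wastes bits, it never corrupts the data.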

The state of the art in text compression has already used this for a long time (see the Hutter Prize); it's just now getting to the stage where systems become fast and accurate enough to also make this kind of compression useful for other domains/general-purpose compression.

[-] ZickZack@kbin.social 8 points 1 year ago

It's because this article is garbage: if you watch the original German video, what he says is

Yuki ist ein junger, aufstrebender, vor allem der beste Japaner.

Which translates to

Yuki is a young, up-and-coming, and above all the best Japanese driver.

Which reads more like it's referring to Iwasa, who is also in the Red Bull juniors program.

[-] ZickZack@kbin.social 15 points 1 year ago

The car is the same as last week.
You have to remember that this is a track that Verstappen really doesn't like: last year's race at Singapore was also his worst.
Usually Verstappen drives ~3 tenths faster than Perez which, had he done that this week, would also have put him up there...

IMO this is less a case of the car being worse and more of Verstappen not being able to get 100% from it.

[-] ZickZack@kbin.social 8 points 1 year ago

24, always driven manual, EU.
From my experience, most people in the EU can, or at least could: this is because many (if not all, not sure) countries make a distinction between manual and automatic licenses (see e.g. https://www.learn-automatic.com/qualified/automatic-driving-licence/).
I.e. if you want to drive a manual, you have to take the test in a manual, but if you take the test on a manual transmission, you are allowed to drive automatics as well.

[-] ZickZack@kbin.social 12 points 1 year ago

No, it's built into the protocol: think of it as if every HTTP request forced you to attach a tiny additional box containing the solution to a math puzzle.

The twist is that you want the math puzzle to be easy to create and verify, but hard to compute. The harder the puzzle you solve, the more you get prioritized by the service that sent you the puzzle.

If your puzzle is cheaper to create than hosting your service is, then it's much harder to DDoS you, since attackers get stuck at the puzzle rather than ever reaching your expensive service.
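A hashcash-style sketch of the idea (generic, not any specific protocol's scheme): the server hands out a random challenge plus a difficulty, the client must find a nonce whose hash starts with that many zero bits, and the server can verify the answer with a single hash.

```python
import hashlib
import secrets

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str, difficulty: int) -> int:
    # Expensive for the client: ~2**difficulty hashes on average.
    nonce = 0
    while leading_zero_bits(
            hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()) < difficulty:
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    # Cheap for the server: a single hash.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = secrets.token_hex(16)        # server attaches this to its response
nonce = solve(challenge, difficulty=12)  # client burns ~4096 hashes
print(verify(challenge, nonce, difficulty=12))  # server checks instantly: True
```

Raising the difficulty for suspicious clients makes a flood of requests cost the attacker far more than it costs you to check them.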

[-] ZickZack@kbin.social 10 points 1 year ago

Standard lossless compression (without further assumptions) is already very close to optimal: at some point the sheer entropy of these huge datasets simply cannot be compressed away anymore.

The most likely savior in this case would be procedural rendering (i.e. instead of storing textures and meshes, you store a function that deterministically generates the meshes and textures). These are already starting to become popular due to better engine support, but pose a huge challenge from a design POV (the nice e.g. Blender-esque interfaces don't really translate well to this kind of process).
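As a minimal sketch of what "store a function instead of the pixels" means (the pattern here is made up for illustration; real engines use node graphs and proper noise libraries):

```python
import math

def wood_rings(x: float, y: float, seed: int = 42) -> float:
    """Deterministic texture: brightness in [0, 1] from (x, y) and a seed."""
    # A cheap sine-hash wobble perturbs concentric rings around (0.5, 0.5).
    wobble = 0.05 * math.sin(seed + 12.9898 * x + 78.233 * y)
    r = math.hypot(x - 0.5, y - 0.5) + wobble
    return (math.sin(40.0 * r) + 1.0) / 2.0

# The "asset" is just this function and the seed: any resolution can be
# generated on demand instead of shipping gigabytes of pixels.
size = 64
texture = [[wood_rings(i / size, j / size) for i in range(size)]
           for j in range(size)]
```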

[-] ZickZack@kbin.social 15 points 1 year ago

It's a different paper (e.g. https://www.nature.com/articles/s41586-022-05294-9) from a different researcher (specifically Ranga Dias). This is not connected to the recent non-peer-reviewed https://arxiv.org/abs/2307.12008.

[-] ZickZack@kbin.social 7 points 1 year ago

I hope heads roll at Haas for that disaster. It's one thing to make wrong choices in high-stakes scenarios, but they sent out their drivers too late twice within 24h. That's just an unforced, inexplicable blunder. If I were Gene Haas I'd be furious: you spend 100 MILLION dollars to develop a car and they don't even manage to get it around the track once.

This is doubly bad considering that sprints are one of their most reliable chances to score points, given that their tire wear doesn't hurt them as much over shorter distances.
They might as well pack up and go home now to conserve their parts, since at this point they're not going to achieve anything anyway.

[-] ZickZack@kbin.social 9 points 1 year ago

It's $\mathbb{X}$, or Unicode 𝕏 (U+1D54F).
Maybe he really likes metric spaces??

[-] ZickZack@kbin.social 24 points 1 year ago

They will make it open source, just tremendously complicated and expensive to comply with.
In general, if you see a group proposing regulations, it's usually to cement their own position: e.g. OpenAI is a frontrunner in ML for the masses but doesn't really have a technical edge over anyone else, so they run to Congress with "please regulate us".
Regulatory compliance is always expensive and difficult, which means it favors people that already have money and systems running right now.

There are so many ways this can be broken, intentionally or unintentionally. It's also a great way to detect, say, government critics and shut them down (e.g. if you are Chinese and everything is uniquely tagged to you: would you write about Tiananmen Square?), or to get monopolies on (dis)information.
This is not literally forcing everyone to get a license to produce creative or factual work, but it's very close, since you can easily discriminate against any creative or factual sources you find unwanted.

In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.

[-] ZickZack@kbin.social 7 points 1 year ago

That paper makes a bunch of (implicit) assumptions that make it pretty unrealistic: basically, they assume that once we have decently working models, we would still continue to do normal "brain-off" web scraping.
In practice you can use even relatively simple models to start filtering and creating more training data:
Think of the original LLM as a huge trashcan into which you try to compress terabytes of mostly garbage web data.
Then you use fine-tuning (like the instruction tuning used for the assistant models) to increase the likelihood of getting non-trash out of the model (or to accurately classify trash vs non-trash).
In general this will produce a dataset of significantly higher quality, simply because you got rid of all the low-quality stuff.
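A sketch of that filtering loop (everything here is hypothetical; a real pipeline would use a fine-tuned model as the scorer rather than this stand-in heuristic):

```python
def quality_score(doc: str) -> float:
    # Stand-in scorer: real pipelines use a trained classifier instead.
    words = doc.split()
    if not words:
        return 0.0
    # Penalize spammy repetition as a crude proxy for "trash".
    return len(set(words)) / len(words)

def filter_dataset(docs: list[str], threshold: float = 0.7) -> list[str]:
    # The scorer only gates what is kept; it never rewrites the text,
    # so the surviving data stays exactly as it was scraped.
    return [d for d in docs if quality_score(d) >= threshold]

docs = ["buy buy buy cheap cheap now now now",
        "a short essay on how compilers allocate registers"]
print(filter_dataset(docs))  # keeps only the second document
```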

This is not even a theoretical construction: Phi-1 (https://arxiv.org/abs/2306.11644) does exactly that to train a state-of-the-art language model on a tiny amount of high-quality data (the model is also tiny: only about half a percent the size of GPT-3).
Previously, TinyStories (https://arxiv.org/abs/2305.07759) showed something similar: you can build high-quality models with very little data, if you have good data (in the case of TinyStories, they generate simple stories to train small language models).

In general, LLM people seem to be rediscovering that good data is actually good, and that you don't really need these "shotgun approach" web-scrape datasets.

[-] ZickZack@kbin.social 10 points 1 year ago

I think you also have to keep in mind the positions that De Vries and Red Bull are in:

  • Red Bull is looking for a second Verstappen-level driver. That's always been the case, not only for Red Bull but for all tier-1 teams: their aspirations are championships, not points or even podiums.
  • De Vries is a 28-year-old rookie. That's usually the age at which drivers retire or lean on their superior experience to make up for their loss in reaction speed and overall pace. The problem is that De Vries has no experience while being older than Verstappen by close to three years. The fact that he got to race at all is a miracle: he would have to beat Tsunoda every week by quite a margin to become relevant for Red Bull. And if he doesn't become relevant for Red Bull, then why have him at AlphaTauri?

Meanwhile they have a young driver in Tsunoda who exists in a limbo due to having nothing to compare against: he could be the fastest driver on the planet in a trash car, or he could be underdelivering without anyone noticing due to the lack of comparison.
This is bad for two reasons:

  1. you don't know whether Tsunoda is an option for Red Bull
  2. you have no idea how good AlphaTauri is overall, which is doubly bad considering that they want to make major changes to how AlphaTauri operates.

On the other hand, you have a perfectly good Ricciardo sitting on his hands who performed really well at Silverstone. Realistically, you aren't going to lose anything by having Ricciardo drive the rest of the season instead of De Vries, but you have the potential upside of more context on the quality of Tsunoda and the team, which you wouldn't get otherwise.

In general I'm more surprised that they ever gave De Vries a chance, considering his age and the context of his big achievements:
In Formula 2 his stiffest competitor was Nicholas Latifi (he won with 266 points to Latifi's 214) in what can be described as a dud year after the majority of now-F1 mainstays had already graduated (he also needed 3 years to win F2, which is never a good sign).
If you have ever seen a Formula E race, you will notice that it is quite a chaotic crash-fest with very weird rules and other nonsense. Just not crashing and not driving too quickly can get you really far by surviving the carbon-fiber mayhem and fuel-conservation issues.
To put it into perspective, here are his race results in the year that De Vries won Formula E [1st, 9th, retired, retired, 1st, 16th, retired, 9th, retired, 13th, 18th, 2nd, 2nd, 22nd, 8th], or, in short: even if we ignore all the DNFs, we get a mean finishing position of 9th!

In short, there's a reason why Mercedes never even tried to get him an F1 spot: he's not a bad driver, but being "not a bad driver" is insufficient for top teams like Mercedes and Red Bull. There's little incentive to put him into any car, even less so nowadays considering his age.

