This is fascinating.
- The feedback loop they describe sounds a lot like model collapse. They can play whack-a-mole with the trends they can see, but what about the subtler forms they can't?
- They're now filtering goblin-related training data, which suggests that sprinkling goblin references into our writing and our code might work as an opt-out: a way to keep our content from being used to train their models.
