I know it won't be popular to say this here, but I think AI music will eventually reach a point where it's capable of doing something interesting.
What it needs in order to do that, however, is to merge biological data with what it generates. If you start feeding heart rate, skin conductance (sweat), and other biometric signals into the machine learning on top of the generation itself, it will eventually find genuinely interesting ways to use sound and music to shape biological response.
That might not even sound like music as we know it, though, which would make it interesting in and of itself.
Ultimately, generative AI without that data will be at a permanent disadvantage to human creators, who already draw on their knowledge of how listeners respond physiologically to music when they compose. It's a logical step, and I think we'll see it.
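To make the idea concrete, the loop I'm imagining could be sketched like this. Everything here is hypothetical: the "sensor" is a toy model that assumes heart rate drifts toward the music's tempo (a stand-in for real entrainment data), and the "generator" is reduced to a single tempo parameter that gets nudged toward a target heart rate. A real system would condition a full generative model on many biosignals, but the feedback structure would be the same.

```python
import random

def simulated_heart_rate(tempo_bpm):
    # Hypothetical sensor model, NOT real physiology: assume the
    # listener's heart rate drifts partway toward the music's tempo,
    # plus measurement noise.
    return 60.0 + 0.2 * (tempo_bpm - 60.0) + random.uniform(-2.0, 2.0)

def adapt_tempo(tempo_bpm, heart_rate, target_hr, gain=0.5):
    # Closed-loop step: nudge the generator's tempo parameter in the
    # direction that moves the measured heart rate toward the target.
    error = target_hr - heart_rate
    return tempo_bpm + gain * error

# Start with energetic music, aim for a calm listener (65 bpm heart rate).
tempo = 120.0
for _ in range(50):
    hr = simulated_heart_rate(tempo)
    tempo = adapt_tempo(tempo, hr, target_hr=65.0)
```

Under this toy model the loop settles near the tempo that produces the target heart rate; the point is just that the biological signal, not a human composer's intuition, is what steers the generation.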

