That's not how copyright works (at least not in the US). When a corporation creates a copyrighted work (by paying the person or persons who actually made it), the term is 120 years after creation or 95 years after publication, whichever expires first. The lifetime of any employee is not taken into account. When a work is created by an individual, the copyright lasts until 70 years after that person dies. You cannot swap out that person for someone else, even if the owner of the copyright changes.
You are probably thinking of a technique used to make private agreements last essentially forever (a so-called "royal lives clause"). A private contract technically isn't allowed to last forever; there has to be some point of expiration. To get around this, the parties pick a condition that almost certainly won't occur for a very long time, such as the death of the last living descendant of the British monarch (presumably chosen because the royal family keeps excellent genealogical records). If a currently living person is required, they might name an infant relative to stretch the term as far as possible.
You are misrepresenting a lot of stuff here.
This depends entirely on the quality of the AI and the task at hand. A well-made AI can be relatively predictable. However, most tasks that AI excels at are tasks that do not themselves have a predictable solution. For instance, handwriting recognition can be solved by a neural network with better-than-human accuracy. That task has no perfect solution, and there is no single ideal answer for each possible input (one person's 'a' can look exactly like another person's 'o'). The same can be said of almost all games, especially those involving a human player.
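For a concrete picture, here is a minimal sketch of the handwriting case using scikit-learn's bundled 8x8 digit images. The dataset and network layout are just illustrative; the point is that "accuracy on held-out data" is the only notion of correctness available, since there is no perfect answer key for ambiguous scrawls.

```python
# Minimal handwriting-recognition sketch (illustrative, not production code).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# There is no "correct" answer for an ambiguous scrawl; all we can do is
# measure how often the network's guess matches the human-assigned label.
print("test accuracy:", net.score(X_test, y_test))
```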
Unpredictable things can still be tested; that's essentially what the entire field of statistics and probability is about. Testability is also a fundamental requirement for any kind of machine learning. It isn't just a best practice: if you can't test your model, you don't really have a model in the first place. The whole point is to create many candidate models and test them to find the best one.
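A sketch of that candidate-and-test loop, again on the illustrative digits data: each candidate network layout is scored the same statistical way (here 5-fold cross-validation), and the best score wins. The specific layouts are arbitrary.

```python
# "Build many candidates, test them all" sketch using cross-validation.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Try several candidate layouts and measure each one identically.
for hidden in [(32,), (64,), (64, 32)]:
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                        random_state=0)
    scores = cross_val_score(net, X, y, cv=5)  # 5-fold cross-validation
    print(hidden, "mean accuracy:", scores.mean())
```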
A neural network only knows what you tell it. If you don't tell it where the player is, it's not going to magically deduce that from nothing. Also, its output has to be interpreted before it can even be used: the raw output is a vector of numbers, and how that is transformed into usable actions is entirely up to the developer. If that transformation allows violating the rules, that's the developer's fault, not the network's. The same can be said of human input; it is the developer's responsibility to transform it into permissible in-game actions.
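As a sketch of what that interpretation step might look like (the action names and `legal_mask` here are hypothetical, not from any particular engine): the developer, not the network, decides which raw scores are allowed to become moves.

```python
import numpy as np

# Hypothetical action set for this example.
ACTIONS = ["up", "down", "left", "right", "shoot"]

def choose_action(raw_output: np.ndarray, legal_mask: np.ndarray) -> str:
    """Pick the highest-scoring action that the rules currently allow."""
    # Illegal actions get -inf, so argmax can never select them.
    scores = np.where(legal_mask, raw_output, -np.inf)
    return ACTIONS[int(np.argmax(scores))]

# The network scores "shoot" highest, but the rules forbid it here,
# so the interpretation layer picks the best legal action instead.
raw = np.array([0.1, 0.3, 0.05, 0.2, 0.9])
mask = np.array([True, True, True, True, False])
print(choose_action(raw, mask))  # -> "down"
```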
That is possible, which is why you should design a performance metric that reflects what you actually want the AI to do. This is a very common issue and simply part of the process of building an AI; it is not an insurmountable problem.
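For instance, here is a sketch of two metrics for a hypothetical racing bot (all names and weights invented): the naive one can be gamed by driving in circles, while the second encodes more of what is actually wanted.

```python
def naive_fitness(distance_travelled: float) -> float:
    # Rewards raw distance; a bot can maximize this by driving in circles.
    return distance_travelled

def better_fitness(track_progress: float, wall_hits: int) -> float:
    # Rewards progress along the track and penalizes crashing,
    # which is much closer to the behavior we actually want.
    return track_progress - 10.0 * wall_hits
```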
Neural networks have been used to play countless games before. It's probably one of the most studied use cases simply because it is so easy to do.