More specifically, I'm thinking about two different modes of development for a library (private to the company) that's already relied upon by other libraries and applications:
- Rapidly develop the library "in isolation" without being slowed down by keeping all of the users in sync. The longer you wait to upgrade users, the more divergence accumulates and the more merge effort you face.
- Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.
As a side note: I believe these approaches are similar in spirit to the continuum of microservices vs monoliths.
Speaking from recent experience, I keep finding that users of my library have built towers on top of obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of downstream code need to be rewritten.
I still think that approach #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.
How do you know when it's the right time to switch? Are there any good strategies for avoiding painful upgrades?
I'm not suggesting that my library is unversioned. It's totally version controlled, and users can upgrade to whatever revision they want to pull in the changes. The changes I make upstream don't affect anyone downstream until they decide to upgrade. It could even adhere to SemVer, but my problem remains: how to minimize rewriting user code? Is it better to have more small upgrades or fewer large upgrades? When is one strategy preferable to another?
SemVer seems logical. If most of your changes are breaking, though, I don't think it really matters whether you release often or occasionally. If it's often, users will get fatigued by the constant upgrades; if it's occasionally, each upgrade will be overwhelming and they'll push it off.
If most of your changes are breaking, you should just disclose that the software is in an alpha/beta state and that it can't be depended on to remain consistent until you have a defined policy about what gets released and when. It will be up to the users to decide if they are comfortable with those terms.
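One concrete way to reduce the pain of a breaking release is to keep the old entry points around as deprecated shims over the new API for a release or two, so downstream code keeps working while teams migrate on their own schedule. A minimal Python sketch of the pattern (the function names here are hypothetical, not from the library being discussed):

```python
import warnings

def read_table(path):
    """New API: suppose v2 renamed load_table() to read_table()."""
    # Real implementation would parse the file; stubbed for illustration.
    return {"path": path, "rows": []}

def load_table(path):
    """Old API, kept for one release cycle as a shim over the new one."""
    warnings.warn(
        "load_table() is deprecated; use read_table() instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller's code
    )
    return read_table(path)
```

Callers see a DeprecationWarning telling them exactly what to change, instead of a hard compile/runtime break, and the shim can be deleted in the following major release.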
For what it's worth, using git is not the same as having a versioned library. Users can't pick up the latest fixes by choosing a newer commit without also building against whatever API changes you've made since. It sounds like your library is effectively unversioned.
I also do SemVer-compliant releases. Backporting fixes is possible.
It doesn't change the fact that there are large breaking changes between versions. The only users of my library are within the same company, and we have all of the convenience of planning out the changes together.
The challenge arises from developers doing large amounts of R&D on separate but coupled libraries. My library happens to be a very central dependency.
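As an aside, the SemVer contract mentioned above can be checked mechanically: only a major-version bump signals a breaking upgrade, with the conventional caveat that anything can break while you're still on 0.x. A rough Python sketch of that rule (a toy illustration, not tied to any particular packaging tool):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into a comparable tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_potentially_breaking(current: str, candidate: str) -> bool:
    """True if upgrading current -> candidate may break callers under SemVer.

    Rule of thumb: a major-version bump is breaking, and while the
    major version is still 0, SemVer makes no compatibility promise,
    so any change at all is treated as potentially breaking.
    """
    cur, cand = parse_semver(current), parse_semver(candidate)
    if cur[0] == 0 or cand[0] == 0:
        return cur != cand  # pre-1.0: no compatibility guarantee
    return cand[0] != cur[0]
```

This is also why staying on 0.x for the experimental phase is an honest signal to downstream teams: the version number itself says "expect rewrites."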
If your library is a core dependency and it is constantly shipping breaking updates, then something is deeply unwell in the environment. That is not sustainable, and it sounds like the library was created without a clear idea of what it should do.
I've been around long enough to know these things happen, but you're not going to find a good way forward because there isn't one. This is going to be a pain point until either the library is stable or the project fails.
I think it's close to stability, and the scope of the library hasn't changed. It's just solving a complex problem that requires several very large data structures, and I've needed to address a couple of important issues along the way.