this post was submitted on 04 Apr 2024
1114 points (98.1% liked)

Programmer Humor

[–] expr@programming.dev 5 points 7 months ago (2 children)

Yeah, it's something people should take the time to learn. I do think its "dangers" are pretty overstated, though, especially if you always use git rebase --interactive, since if anything goes wrong you can easily get out with git rebase --abort.
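A minimal, self-contained sketch of that escape hatch (the repo, file names, and branch names here are made up for illustration; the example provokes a conflict with a plain rebase rather than an interactive one, since the abort behaviour is the same):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > f.txt; git add f.txt; git commit -qm "base"
git switch -q -c feature
echo feat > f.txt; git commit -qam "feature change"
git switch -q main
echo other > f.txt; git commit -qam "conflicting change"
git switch -q feature
before=$(git rev-parse HEAD)
git rebase main || true          # conflicts on f.txt, rebase stops mid-way
git rebase --abort               # bail out; the branch is restored exactly
after=$(git rev-parse HEAD)
[ "$before" = "$after" ] && echo "branch restored"
```

After the abort, the branch tip and working tree are byte-for-byte what they were before the rebase started.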

In general there's a pretty weird fear that you can fuck up git to the point where you can't recover. Basically the only time that's actually true is if you somehow lose uncommitted work in your working tree. But if you've actually committed everything (and you should always commit everything before trying any destructive operations), you can pretty much always get back to where you were. Commits are never actually lost: they stay reachable through the reflog until its entries expire (by default, 90 days for entries still reachable from a ref, 30 days for unreachable ones).
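For instance, a commit "destroyed" by git reset --hard can be brought back from its SHA (a self-contained sketch with hypothetical file and commit names):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo one > notes.txt; git add notes.txt; git commit -qm "first"
echo two >> notes.txt; git commit -qam "second"
lost=$(git rev-parse HEAD)
git reset -q --hard HEAD~1       # "destroys" the second commit
git cat-file -e "$lost"          # ...but the commit object still exists
git reset -q --hard "$lost"      # the saved SHA (or git reflog) gets you back
git log -1 --format=%s           # prints: second
```

In practice you'd find the lost SHA with git reflog rather than having saved it in advance, but the recovery step is the same.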

[–] ipkpjersi@lemmy.ml 4 points 7 months ago

True, the real danger is using git reset with the --hard flag when you haven't committed your changes lol
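Even then there's a partial escape hatch: content that was ever staged with git add lives in the object database as a blob, so it survives a hard reset as a dangling object. Purely unstaged edits really are gone, though. A self-contained sketch (file names are illustrative):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo keep > file.txt; git add file.txt; git commit -qm "init"
echo precious > file.txt
git add file.txt                 # staged, but never committed
git reset -q --hard              # wipes the index and working tree
grep -q keep file.txt            # "precious" is gone from the file...
# ...but because it was staged, the blob still exists as a dangling object
blob=$(git fsck --lost-found 2>/dev/null | awk '/dangling blob/ {print $3; exit}')
git cat-file -p "$blob"          # prints: precious
```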

[–] thanks_shakey_snake@lemmy.ca 2 points 7 months ago (1 children)

You can get in some pretty serious messes, though. Any workflow that involves force-pushing or rebasing has the potential for data loss... Either in a literally destructive way, or in a "Seriously my keys must be somewhere but I have no idea where" kind of way.

When most people talk about rebase (for example) being reversible, what they're usually saying is "you can always reverse the operation in the reflog." Well yes, but the reflog is local, so if Alice messes something up with her rebase-force-push and realizes she destroyed some of Bob's changes, Alice can't recover Bob's changes from her machine: she needs to collaborate with Bob to recover them.
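To make the "local reflog" point concrete, here's a self-contained sketch of the rescue on whichever machine actually has the old tip in its reflog (names and the simulated botched rewrite are illustrative):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > a.txt; git add a.txt; git commit -qm "base"
git switch -q -c feature
echo work > b.txt; git add b.txt; git commit -qm "bobs work"
old=$(git rev-parse feature)
git reset -q --hard main         # simulate a botched rewrite dropping the commit
# the branch reflog on THIS machine still remembers the previous tip:
git branch rescue "feature@{1}"
[ "$(git rev-parse rescue)" = "$old" ] && echo "old tip rescued"
```

A machine that only ever saw the branch after the force-push has no such reflog entry, which is exactly why Alice can't do this for Bob.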

[–] expr@programming.dev 1 points 7 months ago (2 children)

Pretty much everything that can act as a git remote (GitHub, gitlab, etc.) records the activity on a branch and makes it easy to see what the commit sha was before a force push.

But it's a pretty moot point since no one that argues in favor of rebasing is suggesting you use it on shared branches. That's not what it's for. It's for your own feature branches as you work, in which case there is indeed very little risk of any kind of loss.

[–] thanks_shakey_snake@lemmy.ca 2 points 7 months ago (1 children)

Ah, you've never worked somewhere where people regularly rebase and force-push to master. Lucky :)

I have no issue with rebasing on a local branch that no other repository knows about yet. I think that's great. As soon as the code leaves local though, things proceed at least to "exercise caution." If the branch is actively shared (like master, or a release branch if that's a thing, or a branch where people are collaborating), IMO rebasing is more of a footgun than it's worth.

You can mitigate that with good processes and well-informed engineers, but that's kinda true of all sorts of dubious ideas.

[–] expr@programming.dev 1 points 7 months ago (1 children)

Pushing to master in general is disabled by policy on the forge itself at every place I've worked. That's pretty standard practice. There's no good reason to leave the ability to push to master on.

There's no reason to avoid force pushing a rebased version of your local feature branch to the remote version of your feature branch, since no one else should be touching that branch. I literally do this at least once a day, sometimes more. It's a good practice that empowers you to craft a high-quality set of commits before merging into master. Doing this avoids the countless garbage "fix typo" commits (and spurious merge commits) that you'd have otherwise, making reviews easier and giving you a higher-quality, more useful history after merge.
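A self-contained sketch of that daily workflow, using a local bare repo to stand in for the forge (repo layout, branch name, and commit messages are all hypothetical; --force-with-lease is used rather than --force so the push is refused if someone else somehow updated the branch):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null; cd work
git config user.email dev@example.com
git config user.name dev
echo base > README; git add README; git commit -qm "init"
echo x > x.txt; git add x.txt; git commit -qm "feature: add x"
echo fix > x.txt; git commit -qam "fix typo"
git push -q origin HEAD:refs/heads/my-feature
# squash the noisy "fix typo" commit into the real one
git reset -q --soft HEAD~2
git commit -qm "feature: add x"
git push -q --force-with-lease origin HEAD:refs/heads/my-feature
git log -1 --format=%s           # prints: feature: add x
```

Interactively, the squash step would usually be git rebase --interactive origin/master with fixup/squash lines instead of a soft reset; the resulting force-push is the same.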

[–] aubeynarf@lemmynsfw.com 1 points 6 months ago

Why should no one be touching it? You’re basically forcing manually communicated sync/check points on a system that was designed to ameliorate those bottlenecks

[–] aubeynarf@lemmynsfw.com 1 points 6 months ago* (last edited 6 months ago) (1 children)

If “we work in a way that only one person can commit to a feature”, you may be missing the point of collaborative distributed development.

[–] expr@programming.dev 1 points 6 months ago

No, you divide work so that the majority of it can be done in isolation and in parallel. Testing components together, if necessary, is done on integration branches (which you don't rebase, of course). Branches and MRs should be small and short-lived, with merges into master happening frequently. Collaboration largely occurs through developers frequently branching off a shared main branch that gets continuously updated.

Trunk-based development is the industry-standard practice at this point, and for good reason. It's friendlier for CI/CD and devops, allows changes to be tested in isolation before merging, and so on.