This is a hard lesson to learn. From now on, my guess is you will have dozens of backups.
And a development environment. And not touching production without running the exact code at least once and being well rested.
And always use a transaction so you're required to commit to make it permanent. See an unexpected result? Rollback.
Transactions aren't backups. You can just as easily commit before fully realizing it. Backups, backups, backups.
Yes, but
- Begin transaction
- Update table set x='oopsie'
- Sees 42096 rows affected
- Rollback
Can save you from needing a restore, whereas doing the update with autocommit pretty much guarantees one for every mistake you make.
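A rough sketch of that flow (table and column names are made up for illustration):

```sql
BEGIN;

-- Placeholder table/column names, just to show the shape of the pattern.
UPDATE users
SET status = 'active'
WHERE last_login > '2024-01-01';
-- The client reports the affected-row count at this point.

-- Count looks wrong (42096 instead of the handful you expected)? Undo it:
ROLLBACK;

-- Count looks right? Make it permanent instead:
-- COMMIT;
```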
I've read something like "there are two kinds of people: those who back up and those who are about to"
This doesn't help you now, but it may help others: I always run my updates and deletes as selects first, validate that the results are what I want (including the row count), and only then change the select to a delete, update, whatever.
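For example (hypothetical table and filter, just to illustrate the workflow):

```sql
-- Step 1: run the condition as a SELECT and check the rows and the row count.
SELECT id, status
FROM orders
WHERE status = 'pending' AND created_at < '2023-01-01';

-- Step 2: only once that looks right, reuse the exact same WHERE clause.
UPDATE orders
SET status = 'cancelled'
WHERE status = 'pending' AND created_at < '2023-01-01';
```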
I learned this one very early on in my career as a physical security engineer working with access control databases. You only do it to one customer ever. 🤷‍♂️
Pro tip: transactions are your friend
Completely agree, transactions are amazing for this kind of thing. In a previous team we also had a policy of always pairing if you need to do any db surgery in prod so you have a second pair of eyes + rubber duck to explain what you're doing.
Postgres has a useful extension, pg_safeupdate
https://github.com/eradman/pg-safeupdate
It helps reduce these possibilities by requiring a where clause for updates or deletes.
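If I'm reading the README right, it can be enabled per session with LOAD (or preloaded server-wide), and unqualified statements then error out. Roughly:

```sql
-- Assumes the pg-safeupdate extension is installed on the server.
LOAD 'safeupdate';

-- This should now fail with something like "DELETE requires a WHERE clause"
-- instead of silently wiping the table:
DELETE FROM accounts;

-- A qualified statement still works as usual:
DELETE FROM accounts WHERE id = 42;
```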
I guess if you get into a habit of adding where 1=1 to the end of your SQL, it kind of defeats the purpose.
MySQL (and by extension, MariaDB) has an even better option:
mysql --i-am-a-dummy
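As far as I know, --i-am-a-dummy is an alias for --safe-updates, and you can toggle the same behaviour per session; something like:

```sql
-- Equivalent to launching the client with --safe-updates / --i-am-a-dummy.
SET SQL_SAFE_UPDATES = 1;

-- An UPDATE or DELETE with no WHERE clause (or one that doesn't use a key)
-- should now be rejected with error 1175 instead of running.
UPDATE users SET active = 0;

-- Turn it back off if you genuinely need an unqualified statement:
SET SQL_SAFE_UPDATES = 0;
```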
All databases (well, it doesn't seem like MSSQL supports it, and I thought that's a pretty basic feature) have a configuration option that warns or throws an error when you try to UPDATE or DELETE without WHERE. Use it.
I tried to find this setting for Postgres and MS SQL Server, the two databases I interact with. I wasn't able to find any settings to that effect; do you happen to know them?
You're not the first. You won't be the last. I'm just glad my DB of choice uses transactions by default, so I can see "rows updated: 3,258,123" and back the fuck out of it.
I genuinely believe that UPDATE and DELETE without a WHERE clause should be considered a syntax error. If you want to do all rows for some reason, it should have been something like UPDATE table SET field=value ALL.
This, folks, is why you don't raw dog SQL like some caveman
Me only know caveman. Not have big brain only smooth brain
Always SELECT first. No exceptions.
Better yet... Always use a transaction when trying new SQL/doing manual steps and have backups.
I once dropped a table in a production database.
I never should have had write permissions on that database. You can bet they changed that when clinicians had to redo four days of work because the hosting company or whatever only had weekly backups, not daily.
So, I feel your pain.
There is still the journal you could use to recover the old state of your database. I assume you committed after your update query, so you would need to first copy the journal, remove the updates from it, and reconstruct the DB from the altered journal.
This might be harder than what I'm saying and heavily depends on which db you used, but if it was a transactional one it has to have a journal (not sure about nosql ones).
You all run queries against production from your local? Insanity.
The distinctions get blurry if you’re the sole user.
My only education is a super helpful guy from Reddit who taught me the basics of setting up a back end with nodejs and postgres. After that it's just been me, the references and stack overflow.
I have NO education about actual practices and protocol. This was just a tool I made to make my work easier and faster, which I check in and update every few months to make it better.
I just open vscode, run node server.js to get started, and within server.js is a direct link to my database using the SQL above. It works, has worked for a year or two, and I don't know any other way I should be working. Happy to learn though!
(but of course this has set me back so much it would have been quicker not to make the tool at all)
Everyone has a production system. Some may even have a separate testing environment!
Periodic, versioned backups are the ultimate defense against bugs.
Periodic, versioned and tested backups.
It absolutely, totally, never ever happened to me that I had a bunch of backups available that turned out to be effectively unrestorable the moment I needed them. 😭
The only feeling worse than realizing you don't have a backup is realizing your backup archives are useless.
Or like that time GitLab found out that none of its five backup/replication mechanisms worked and it lost six hours of data.
WHO, WHAT, ~~WHERE~~, WHEN, WHY, HOW
Who thought it was a good idea for SQL syntax to only accept the WHERE condition after the SET?? Disaster waiting to happen.
There was a time I worked as a third party for one of the 10 most accessed websites in my country. I got assigned to a project that had been maintained by another third party for 10+ years with no oversight. I have many stories from there, but today's is that this company had very strict access control to the production database.
Third parties couldn't access the database directly. All fine and good, except for the fact that we were expected to set up some stuff in the database pretty much every week. The dude who kept this project running for the previous decade, unable to get proper credentials to do his job, instead created an input box in some system that allowed him to run any SQL code.
You can already guess the rest of the story, right? For security reasons we had to do things in the least secure way imaginable. Eventually, wheres were forgotten.
But if they had such strict access control they also had backups. Right? Right?!
I did that once when I moved from one DB IDE to another and didn't realise the new one only ran the highlighted part of the query.
there were thousands of medical students going through a long process to find placements with doctors and we had a database and custom state machine to move them through the stages of application and approval.
a bug meant a student had been moved to the wrong state. so I used a snippet of SQL to reset that one student, and as a nervous habit highlighted parts of the query as I reread them to be sure it was correct.
then hit run with the first half highlighted, without the where clause, so everyone in the entire database got moved to the wrong fucking state.
we had 24 hourly backups but I did it late in the evening, and because it was a couple of days before the hard deadline for the students to get their placements done hundreds of students had been updating information that day.
I spent until 4am the next day working out ways to infer what state everyone was in by which other fields had been updated to what, and incidentally found the original bug in the process 😒
anyway, I hope you feel better soon buddy. it sucks but it happens, and not just to you. good luck.
I watched someone make this mistake during a screen share. She hit execute and I screamed, "Wait! You forgot the where!" Fortunately, it was such a huge database that SQL spun for a moment, I guess deciding how it was going to do it before actually doing it, and she was able to cancel it and run a couple of checks to confirm it hadn't actually changed anything yet. I don't think anything computer-related has ever gotten my adrenaline going like that before or since.
In MSSQL, you can do a BEGIN TRAN before your UPDATE statement.
Then if the number of affected rows is not about what you'd expect, doing a ROLLBACK would undo the changes.
If the number of affected rows did look about right, doing a COMMIT would make the changes permanent.
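In T-SQL that looks roughly like this (table and column names are placeholders); @@ROWCOUNT holds the affected-row count of the most recent statement:

```sql
BEGIN TRAN;

UPDATE dbo.Orders
SET Status = 'Archived'
WHERE OrderDate < '2020-01-01';

-- Check how many rows the UPDATE actually touched.
SELECT @@ROWCOUNT AS RowsAffected;

-- Not what you expected? Undo it:
ROLLBACK;

-- Looks right? Make it permanent instead:
-- COMMIT;
```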
I have done this too. Shit happens.
One of my co-workers used to write UPDATE statements backwards (LIMIT, then WHERE, etc.) to prevent this stuff, but it feels like a bit of a faff to me.
I always write it as a select, before turning it into a delete or update. I have burned myself too often already.
Thanks for sharing your painful lesson. I don't directly query DBs a lot, but WHEN I do in the future I'll select first and validate. Such things can happen so fast. No self-hate, man!
You could use DBeaver, which warns you about update and delete queries without a where clause, independently of the DB system. I hope the functionality is still there, since, for totally unrelated reasons, I always use a where clause, even when buying groceries.
Depending on the database used, the data might still be there, just really hard to recover (as in, its presence is a side-effect, not the intent).
https://stackoverflow.com/a/12472582 takes a look at Postgres case, for example.
I learned the hard way about the beauty of backups and the 3-2-1 rule. And snapshots are the GOAT.
Even large and (supposedly) sophisticated teams can make this mistake, so don't feel bad. It's all part of learning and growth. You have learned the lesson in a very real and visceral way - it will stick with you forever.
Example - a very large customer running our product across multiple servers, talking back to a large central (and shared) DB server. DB server shat itself. They called us up to see if we had any logs that could be used to reconstruct our part of their database server, because it turned out they had no backups. Had to say no.
A gut wrenching mistake, hopefully you'll only make it once!
Oof. Been there, done that, 0 stars; would not recommend.