These are more or less the thoughts I typically hear online, and they all make sense. What I tend to notice when interviewing people from companies big(ger) than mine (mostly banks) is that testing for them seems to be mostly about hitting some minimum coverage number in CI/CD. That probably still has big benefits, but it doesn't seem super thoughtful? Or is testing just so important that even testing on autopilot has decent value?
I get that same feeling with frontend testing. Unit testing makes sense to me. Integration testing makes sense too, but I find it hard to fit into the time I have. Frontend testing, though, is very daunting. These days, if I test anything on the frontend at all, it's only the data models we keep there.
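For what it's worth, that part is cheap, since a model test is just a plain unit test with no DOM involved. A rough sketch of what I mean (the `CartModel` class here is made up for illustration; any plain model class works the same way):

```ts
// cart.test.ts -- plain unit test of a frontend data model, no DOM required.
// CartModel is a hypothetical example; the pattern applies to any model code.

class CartModel {
  private items: { sku: string; price: number; qty: number }[] = [];

  add(sku: string, price: number, qty = 1): void {
    const existing = this.items.find((i) => i.sku === sku);
    if (existing) {
      existing.qty += qty; // merge repeat SKUs instead of adding a new line
    } else {
      this.items.push({ sku, price, qty });
    }
  }

  total(): number {
    return this.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }
}

describe("CartModel", () => {
  it("merges duplicate SKUs and totals correctly", () => {
    const cart = new CartModel();
    cart.add("apple", 2.5);
    cart.add("apple", 2.5, 2);
    expect(cart.total()).toBe(7.5);
  });
});
```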
Test coverage is useful to measure simply because it’s a metric. You can set standards. You can codify the number into CI/CD. You can observe whether the number goes up or down over time. You can argue about whether these things are valuable, but quantifying test coverage makes it simpler, or even possible, to discuss testing at all. As discussing coverage and building tests become normalized, the topic becomes boring. You’ll only get thoughtful discussions on automated testing when somebody establishes a new method, pattern, etc. After that, most tests are very simple. That’s often the point.
Even “testing on autopilot” has high value.
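To make “codify the number into CI/CD” concrete: in a Jest setup, for example, it can be as small as a threshold in the runner config, so the build fails whenever coverage drops below the agreed floor. A minimal sketch (the numbers are arbitrary placeholders, not a recommendation):

```ts
// jest.config.ts -- fail the test run (and therefore the CI job) when
// coverage falls below the agreed floor. Thresholds here are placeholders.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,
      branches: 70,
    },
  },
};

export default config;
```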
You can build lots of useful frontend tests; there are tools for it. But it’s just not possible to test everything, because you can’t codify every requirement. E.g. ensure that this UI element is 5 pixels below some other element, except when the window shrinks, and …
I haven’t seen great frontend tests. But the ones I have seen mostly focus on functionality and flow rather than trying to cover every possible scenario. Unit tests are different in that regard.
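The better ones read roughly like this React Testing Library sketch, where the assertion is about what the user sees after a flow, not about markup details (`LoginForm` and its labels are hypothetical):

```tsx
// login.test.tsx -- tests a user-visible flow, not pixels or DOM structure.
import React from "react";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import "@testing-library/jest-dom"; // provides the toBeInTheDocument matcher
import { LoginForm } from "./LoginForm"; // hypothetical component

it("shows a validation error when submitting an empty form", async () => {
  render(<LoginForm />);
  await userEvent.click(screen.getByRole("button", { name: /log in/i }));
  expect(await screen.findByText(/email is required/i)).toBeInTheDocument();
});
```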
This is a red flag. Building tests should be a planned part of your work, usually described in the acceptance criteria. If a code change takes 4 hours to write, then plan for 8 (or whatever) so you can build tests too. Engineering leaders should encourage this. If they don’t, I would consider that a cultural problem, one that indicates a lack of focus on quality, with all of the problems that follow.
Edit: I want to soften my “red flag” comment. That’s a red flag for me. That job isn’t necessarily bad, but I would personally not be interested. It’s OK to accept things like, “we don’t write tests and sometimes we deal with issues”, especially if it’s a good job otherwise.
Nah, red flag is certainly accurate in my case.
We really don't have a strong hierarchy of engineering leaders; devs are all pretty much equal. The EM is extremely hands-off, but also prefers to hire inexperienced developers to "train them up" (which seem like contradictory ideas...). So we have a very free-for-all development process after work is assigned. And of course very few (zero?) devs want to start doubling estimates for quality that no one seems to care strongly about.
(The saving grace here, if you can call it that, is that it's very easy to go around leadership and do whatever you want with the dev process, as long as you can do it yourself. So perhaps what I should do is add stricter code coverage checks on the services worked on primarily by me, as a way to protect me from myself, and maybe convince some others to join in.)
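If it helps: Jest, for one, lets you scope coverage thresholds to specific paths, so the gate can apply only to the services you own without touching anyone else's code. Same idea as a global threshold, just narrower (the path and numbers here are made up):

```ts
// jest.config.ts -- coverage gate scoped to one directory instead of global.
// The path and thresholds below are illustrative only.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    "./src/services/billing/": {
      lines: 85,
      branches: 75,
    },
  },
};

export default config;
```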