crates.io Postmortem: Broken Crate Downloads
(blog.rust-lang.org)
Why do we have to keep learning to test and automate our tests as hard lessons?
Why do software engineering lectures not teach us about testing? If I were asked to teach software engineering (which, TBH, I'm probably not qualified to do just yet), I'd start with testing.
I've always thought it weird that the intro CS course I took at my university didn't even mention unit testing. After being in the industry for several years, it's become obvious that the majority of what I do is just writing tests.
If you wanted to introduce every industry best practice in an intro course you'd never get to the actual programming.
It would be good to have a 1-credit course (one hour a week) where you learn industry best practices like version control, testing, and stuff like that. But it definitely shouldn't be at the start.
Hard disagree. Cover less material if needed, but students should get into the habit of writing tests for everything they turn in. If I were a professor, I would reject any submitted code that didn't have tests, for the same reason that math teachers reject work if students don't show their work.
There's a difference between tests and assertions. Students do test their code; they just don't write assertions. As I said, you want the cognitive load to be as low as possible so that they can master the basics. I'm fine with tests being provided to them, but they should be focusing on learning the constructs at the start.
In any field, the real life practice of a profession is something you learn working for an actual company, whether it's through an internship or an entry level job. Ideally there should be unions or syndicates setting these standards so that they're consistent across the field, just like with other knowledge based professions.
Universities are not corporate training programs, and they aren't supposed to be.
A huge part of computer science is proving correctness, complexity, etc. Almost all of my classes had an automated test suite that your code needed to pass to get full credit for the assignment. I think it's completely reasonable that you "show your work" by writing your own tests from the start.
If programming is just one or two classes of your program (e.g. you're doing IT or something), then I can excuse testing not being a part of it. But if you're going after a formal CS or CS-adjacent degree, you should be in the habit of proving the correctness of your code.
I'm totally fine with other industry norms being ignored, such as code style, documentation, and defensive programming, however, testing should absolutely be a regular part of any form of software development. I want every CS grad to always be thinking in terms of "how can I prove this" instead of just "how can I solve this." I don't think 100% code coverage should be expected, but students should prove the most important part of their solution.
If teachers were using automated tests instead of printf in their intro courses, it would be so much better. I don't think that introducing all the various kinds of tests is useful, but just showing the concept of automated tests instead of manual ones would be a huge step forward.
The thing is the way they motivate new students to learn programming is by having them write programs that do something. Making a test green isn't as motivating as visually seeing the output of your work, and test fixtures can be complex to set up depending on the language. I mean students don't learn how to factor their code into methods until later into such a course, they're learning if statements and for loops and basic programming constructs. Don't you think having to explain setting up test fixtures and dependency inversion is a bit too much for people at that level?
Honestly, that is weird. I wouldn't expect an intro course to go into a lot of depth on testing or even necessarily show how to use a test framework, but I'd expect them to at least have "printf style" unit tests.
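To illustrate the gap being discussed, here is a minimal Rust sketch contrasting a "printf-style" check with an automated one; `celsius_to_fahrenheit` and `manual_check` are hypothetical names invented for the example.

```rust
// Hypothetical example function an intro assignment might ask for.
fn celsius_to_fahrenheit(c: f64) -> f64 {
    c * 9.0 / 5.0 + 32.0
}

// "printf-style" checking: print the result and eyeball it yourself.
fn manual_check() {
    println!("{}", celsius_to_fahrenheit(100.0)); // is 212 right? you decide
}

// Automated version: the machine decides, every time the suite runs.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn boiling_point() {
        assert_eq!(celsius_to_fahrenheit(100.0), 212.0);
    }
}
```

The automated version costs only a few more lines but keeps checking the answer forever, which is the whole point of the argument above.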
But lol, yeah, tests usually take far longer to write than the actual change I made. A one-line change might need a hundred lines of test code. And if you're testing something that doesn't already have a similar test you can start from, programming the test setup can take some time. It depends a lot on what your code does, but sometimes you have to set up a whole fake database and a hierarchy of resources with a mixture of real objects and stubs.
And then there's me, who almost never writes unit tests 😬
(With strong typing I can minimize explicit tests, and I like to iterate fast, but I guess it really depends on what you're developing, backend in production that is not allowed to fail, is probably something different than a game)
Unit tests shouldn't be testing types, even if your language isn't typed. It should be testing logic and behavior. If there's an if condition, it should be tested.
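As a sketch of that principle, here is a hypothetical Rust function with a single if condition, and tests covering both branches plus the boundary (`discount` and its rule are made up for illustration):

```rust
// Hypothetical discount rule: each branch below is a behavior worth a test.
fn discount(order_total: u32) -> u32 {
    if order_total >= 100 {
        order_total / 10 // 10% off large orders
    } else {
        0 // no discount otherwise
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn large_order_gets_discount() {
        assert_eq!(discount(200), 20);
    }

    #[test]
    fn small_order_gets_none() {
        assert_eq!(discount(99), 0);
    }

    #[test]
    fn boundary_is_covered_too() {
        assert_eq!(discount(100), 10);
    }
}
```

Note that no test here checks a type; all three pin down behavior the compiler cannot see.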
Yeah, you're right, tests should test logic. But static typing certainly helps reduce the number of tests that would be necessary in untyped languages. You can also sometimes encode your logic in types, and typing helps reduce logic issues. But as previously said, it depends on what you're doing. I'm prototyping/researching a lot, and tests often hinder progress for me. Maintaining a backend in production is a different story.
That is absolutely true as well. We're porting a codebase to TypeScript and we were able to eliminate a bunch of test cases that were essentially testing type-correctness (e.g. can't pass a boolean to a date processing library). But those were bad tests to begin with; there was no good reason for them to exist (we were pretty exhaustive with the invalid type checking even when the intended types were obvious).
Strict typing helps eliminate useless tests. And Rust types go further than most languages, such as exhaustive match, types that can exclude zero, and the near-complete lack of a null value.
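A minimal sketch of those three features, using only the standard library (the `PaymentState` enum and helper functions here are made up for illustration):

```rust
use std::num::NonZeroU32;

// Exhaustive match: adding a variant is a compile error until every match
// is updated, so a whole class of "forgot to handle X" tests disappears.
enum PaymentState {
    Pending,
    Settled,
    Refunded,
}

fn label(state: &PaymentState) -> &'static str {
    match state {
        PaymentState::Pending => "pending",
        PaymentState::Settled => "settled",
        PaymentState::Refunded => "refunded",
        // no catch-all arm: the compiler enforces exhaustiveness
    }
}

// A type that excludes zero: this division can never panic.
fn per_item_cost(total: u32, count: NonZeroU32) -> u32 {
    total / count.get()
}

// Option instead of null: callers are forced to handle absence.
fn find_even(xs: &[u32]) -> Option<u32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}
```

Each of these replaces a test you would otherwise need to write in a dynamically typed language.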
If you're never going to publish the code, I agree, tests aren't necessarily helpful. Then again, I find writing tests helps me understand my own code better, so I still do it when doing research tasks (e.g. we were testing the potential performance benefits of porting an expensive algorithm to Rust, so my tests helped me benchmark it), though my tests are a lot less exhaustive and tend to be more happy path integration tests instead of proper unit tests.
Hmm, interesting. I try to optimize the readability of the actual code itself, so that when I read it again after some time I quickly get what it's about. If there's an edge case or something I thought about while coding, I'll just add a TODO comment or something like that. I feel like reading tests is a "waste of time" for me most of the time (hard take, I know ^^).
But all this obviously only applies for researching and fluid code (code that likely will be refactored/rewritten soon), when it's solid code targeting production etc. I'll add unit tests if friction/hassle is low, and integration/E2E tests for the overall thing. But as I said, I'm mostly in fluid/fast moving codebases that are more difficult to test (e.g. because it does gpu rendering or something like that).
When I jump into a new codebase, my first instinct is to look over the examples and the unit tests to get a feel for how things are intended to work.
When prototyping, I generally write a few "unit" tests as a form of example code to show basic usage. For example, if I'm writing a compiler for a new toy language, I'll write some unit tests for each basic feature of the language. If I'm writing networking code (e.g. a game server), I'll write some client and server tests that demonstrate valid and invalid packets. These generally fall somewhere between unit and integration tests, but I do them in a unit test style. When the project stabilizes, I'll go through and rewrite those tests to be narrower in scope, broader in line coverage, and simpler, and keep a few as examples (maybe extract to the readme or something).
That's my workflow, and I like knowing that at least part of it is tested. When I mess with stuff, I have a formal step to change the tests as a form of documenting the change I made, and I'll usually leave extensive comments on the test to describe the relevance.
Code readability counts, but I don't think it's enough. The codebase I work on day to day is quite readable, but it's very complex since there are hundreds of thousands of lines of code across over a dozen microservices, and there's a lot of complexity at the boundaries. When I joined the project, I read through a lot of the tests, which was way more helpful to me than reading the code directly. The code describes "how," but it doesn't explain "what" or "why." Tests get into "what" extensively, and "why" can be understood by the types of tests developers choose to write.
Ok, thinking about it (since I wrote a toy language not so long ago), this is probably a perfect example where unit tests make sense almost everywhere (even for prototyping, say parser).
I think it definitely depends what you're doing, writing unit tests for prototype graphics (engine) code is no fun (and I see no real benefit).
I think it depends. For general architecture, E2E or integration tests definitely make sense. For finer-grained code, I think documentation (rustdoc) of the functions in question should be enough to understand what they do, including some examples of how to use them; those could be tests, but often examples in the rustdoc (similar to std Rust) are enough IMHO, and otherwise the code itself is the documentation (being able to read code fast is a valuable skill, I guess). That obviously doesn't apply to everything (think highly theoretical computer science or math code), but yeah, it depends...
Yeah, I wouldn't bother for graphics code either. For that, I want compilable examples, and that's about it.
I do a lot of math and parsers, and that lends itself very well to unit tests.
Strong typing doesn't prevent the need for tests. It can certainly catch some issues (and I don't like dynamically typed languages as a result), but there's no replacement for unit testing. So much refactoring is only safe because of rigorous test coverage. I can't begin to tell you how many times a "safe" refactoring actually broke something and it was only thanks to unit tests that I found it.
If code is doing anything non-trivial, tests are pretty vital for ensuring it works as intended (and for ensuring you don't write too much code before you realize something doesn't work). Sure, you can test manually, but manual testing often has a hard time covering edge cases. And manual testing won't help you prevent regressions, which is usually the biggest reason to write unit tests. If you have a big, complicated system worked on by more than one person, tests can be critical for ensuring other people (who often have no idea how your code works) don't break your code. Same goes for your own future changes.
Funny how you got successfully distracted by the procedural failure dance, where the obvious, as expected, got zero mentions. Giving software engineering lectures seems to be right up your alley.
If I was the author of that commit, or any crates.io developer, I would have wanted to be called out for not constructing URLs correctly. That's the obvious first fault here. Not even hinting at that would have felt so cringe.

I can't tell if your comment is intentionally sarcastic, but it sure sounds like you're saying "just don't write buggy code in the first place!"
It's about not ignoring the clear underlying cause of the bug that is screaming at everyone who reads the bug description.
Include something along the lines of "We will use the URL crate and utilize its API to avoid trivial URL construction errors like this one in the future", and I may take your postmortem seriously.
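To make the failure mode concrete: naive string concatenation can silently produce double or missing slashes. Below is a minimal stdlib-only Rust sketch of a normalizing helper; `join_path` is a hypothetical function written for this comment, not the url crate's API (in real code you would reach for something like url::Url::join):

```rust
// Naive concatenation like format!("{}{}", base, path) yields
// "https://host//crates/foo" or "https://hostcrates/foo" depending on
// which side carries the slash. Normalizing both sides avoids that.
fn join_path(base: &str, path: &str) -> String {
    format!(
        "{}/{}",
        base.trim_end_matches('/'),
        path.trim_start_matches('/')
    )
}
```

Even a tiny helper like this centralizes the slash handling in one tested place instead of at every call site.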
A flawless developer does not exist, and at no point did I fault any developer directly for their development work. But that doesn't mean we should ignore something that is/was clearly and inherently wrong with the code. You would think this is all stating the obvious.
So it's not "just don’t write buggy code in the first place!”. It's "this code could clearly have been written in a way that would have prevented this bug from ever taking place".
And yes, good code matters. A good language matters. A good type system matters. A good use of a good language with its type system, patterns, abstractions, ecosystem, and all it has to offer matters. This is Rust, after all. If those things don't matter, then we might as well let the code be written in Python or JS, and fully recommit to the church of TDD.
That basically is the same as saying "next time we will write correct code" in your postmortem, which I don't think is very useful. It's much more useful to say "our code is not structured in a way that makes testing easy" and "our smoke tests should cover the thing that broke." That gives you something actionable to work on that will actually prevent this from happening in the future. Otherwise, you'll end up writing essentially the same postmortem over and over again, each time saying "we will write correct code."
False dichotomy much!
See this postmortem from Cloudflare as an example.
Under "What went wrong", point 1 and 3:
And on what needed to be done, point 4:
See! Plenty of procedural talk in that postmortem. Plenty of corporate talk too. But you have to mention that a bad backtracking regex was used. And you have to mention that using regexes with no complexity guarantees was glaringly wrong. To not have done so would have been silly. To not even come close to mentioning those things beyond the specific error in that specific regex, and you wouldn't have been taken seriously.
> A good language matters. A good type system matters. A good use of a good language with its type system, patterns, abstractions, ecosystem, and all it got to offer matters.
Eh, research shows otherwise. Rust eliminates defects for a very particular set of problems, but when it comes to logical correctness it isn't better or worse than other languages. If those problems are prominent in your domain (such as having to write a ton of concurrent code), Rust makes sense. Otherwise, being well rested will have a bigger impact on the quality of your code than the best type system in the world.
In terms of dev practices, the only practice demonstrated to have a consistent positive impact on code quality is code reviews. Testing as well, but whether it's TDD or other kinds of testing doesn't really matter.
Can you share that research?
https://youtu.be/WELBnE33dpY
It's not that there is evidence that it doesn't matter, but there is no evidence showing that it does.
Can you concede, at least to yourself, that you made ^ this ^ up?
By the way, what you claimed "research shows" is so ridiculous that it's hilarious that you wrote it while being serious.
Hell, I cheekily mentioned Python and JS in particular because the former introduced type hints and the latter triggered creating TS as a saner shield.
Btw, that wrongly-constructed URL wasn't even an external one. We literally have web frameworks that make sure non-external URLs with invalid paths are impossible to construct. In other words, attempting to construct a wrong one would be a compile error.
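One way to sketch that idea in plain Rust, without any particular framework (the `Route` enum and its paths are hypothetical): internal routes become an enum, so a malformed path simply cannot be constructed, and forgetting to handle a route is a compile error.

```rust
// Hypothetical typed routes: the only way to get a path string is
// through the enum, so ad-hoc string concatenation never happens.
enum Route {
    CrateDownload { name: String, version: String },
    Index,
}

fn to_path(route: &Route) -> String {
    match route {
        Route::CrateDownload { name, version } => {
            format!("/crates/{}/{}/download", name, version)
        }
        Route::Index => "/".to_string(),
    }
}
```

Real web frameworks push this further (typed parameters, compile-time route checking), but even this much removes the "smashed strings" failure mode.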
There is still no research that definitively shows that static types reduce defects more than dynamic types, this is a fact. Turns out we are incredibly bad at studying this, so I don't know how you can say definitively that it is the case when even the people who study this for a living are not able to make that case.
Come on. What was requested by the other user is clear, I think.
You made this specific claim. Can you link to the research showing that? Actual research showing that "Rust eliminates defects for a very particular set of problems, but when it comes to logical correctness it isn't better or worse than other languages", not a YT video from a wannabe intellectual talking abstracts and citing some generic studies.
It was my mistake: I said that we definitely know they don't, vs. there being no evidence showing that they do. There aren't many studies to back this up. The whole point of the talk is that software engineering as a discipline is really poorly studied, and we tend to make assertions like this without actually validating them.
If I was betting money on this (i.e. deciding where to focus my investment), the quality of the type system would only matter if the type system caught real problems that I face in my day-to-day work. For a web app, for instance, it makes no sense to use Rust vs. a GC'd language, because the kinds of bugs you face in web apps aren't really the kinds of issues a borrow checker will help you with. The whole point of Rust being difficult is that it saves you time down the line; if it's difficult and it doesn't, then that tradeoff doesn't make sense.
Hillel teaches formal verification for a living; he very much sees the value of automatically proving properties about your program, as do I. But the reality is that the type system doesn't necessarily help as much as we think it does.
It's quite reductionist (and weird) to describe Rust's type system in terms of its borrow checker only, ditto for describing the others simply as "GC'd languages".
The borrow checker, together with move semantics and RAII, is a small, if dominating (especially for beginners), part of Rust's type system. There are many other very relevant aspects: type classes (traits), sum types (enums), hell, not being OOP alone is a big win for many.
Talking about Rust only in terms of its type system would also be reductionist. The macro system alone is a big differentiator (I would know, because I've been working on a proc-macro crate for some time which will support (de)serializing a format with both more flexibility and reliability than what serde can offer). Even talking about Rust only in terms of the language is reductionist! The ecosystem and tooling... okay, I will stop reaching further here.
Talking about the others as "GC'd languages" is also reductionist and weird, since it puts, for example, Go (lol) and Haskell in the same bracket. And those are two strongly and statically typed languages. I could have picked two languages that are even more different from each other. "GC" as a differentiator for languages is actually even more reductionist than "borrow checker" for Rust.
Rust is neither actively trying to be difficult, nor really difficult beyond some early friction while learning the language, and even that friction is often overblown by many.
We do actually have some data on this (I wouldn't dare calling it "research"). There is this part which actually relates to one of the arguments made in your YouTube link:
Honestly, url.join looks like a cluster-fuck of landmines to me: https://github.com/servo/rust-url/issues/333

I'd probably have just stuck with strings as well.
And this argument works as long as nothing wrong happens. Well, something wrong happened ;)
Smashing strings together is how this bug happened.
Constructing URLs reliably should have been the obvious first takeaway, was my point, instead of pretending the issue is not there. If Url::join() is somehow too confusing for some, then there are other ways to do it with a simpler API, no problem.

> I would have wanted to be called out for not constructing URLs correctly.
You might have overlooked that we do not start out as experts. It is simply impossible. There is no way to guarantee that we know how to do things 100% correctly before we write the code. Even if we were experts, we're still human, and we'll screw something up. This is just one of the reasons why we write proper tests and automate them.