The birth of JS
(lemmy.world)
This is one of the most educational and entertaining reads on the internet, if you are into that kind of thing:
https://github.com/denysdovhan/wtfjs
Has anyone actually read through that? Reading the first few examples, half of the time it's just someone not understanding how languages work:
Wow, no shit, non-empty string coerces to true, who would've guessed! Did you know that
!!"bullshit" === !!"true"
as well? Mind = blown.
Again, no shit, that's in the floating-point spec's definition of NaN, and the page even mentions it, so why even include it?
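For reference, here is the coercion being complained about, as it plays out in any JS console; this is standard ECMAScript ToBoolean behaviour, nothing exotic:
// Every non-empty string is truthy, regardless of what it says
!!"bullshit"      // true
!!"true"          // true
!!"false"         // true  -- "false" is a non-empty string, so it is truthy
!!""              // false -- the empty string is the only falsy string
Boolean("false")  // true  -- same rule, spelled out explicitly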
Which is why I'm of the opinion that dynamically typed languages are evil. !!"false" should either be caught at compile time or raise an exception.
I'm thoroughly convinced that the only use of dynamically typed languages is to introduce bugs
I am with you. To me these are non-obvious details, just a bug waiting to silently happen in production.
Dynamically typed doesn't imply it's monotyped. And monotyped languages can work just fine; you just have to not hide different operations behind the same symbols, differing only by type, the way JS does.
The entire problem with JS is that it both is monotyped and it isn't.
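A concrete example of "different operations hidden under the same symbol, differing only by type" is the + operator in standard JS:
// + is numeric addition for numbers but concatenation as soon as either side is a string
1 + 2      // 3
"1" + 2    // "12"  -- the number is coerced to a string and concatenated
1 + "2"    // "12"
"3" - 1    // 2     -- but - only means subtraction, so here the string becomes a number
[] + {}    // "[object Object]" -- both operands are coerced before concatenation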
Isn't the whole point of dynamic languages that they're monotyped? They're equivalent to a type system with only one type, any. Really, most dynamic languages are equivalent to having a single tagged union of all the different sorts of values in the language. If you add additional types, you get into gradual type systems.
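One way to picture that single tagged union in JS itself: every value carries a runtime tag, and typeof is (roughly) how you read it:
// The runtime tag behind the "one big tagged union" view of a dynamic language
typeof 42          // "number"
typeof "42"        // "string"
typeof true        // "boolean"
typeof undefined   // "undefined"
typeof {}          // "object"
typeof null        // "object"   -- a historical wart: null's tag is reported as "object"
typeof (() => {})  // "function"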
A language has dynamic types if the type-resolution is done at runtime. The other kind is static types, where it's done at compile-time.
A language is monotyped if every value is compatible with every operation, so there's actually no type resolution.
A language has explicit types if you declare your types, implicit types if you can't declare them, type derivation if declarations are optional but types still exist and are static whether you declare them or not, or gradual types if declarations are optional but types exist and are dynamic whether you declare them or not.
All of those things are different.
Also, some people will insist "types" can only be static. Go ask those people what they call the things Python has, because either they just invented some different words, or they are only trying to confuse you.
Bob Harper uses 'unityped' in his post about how dynamic typing is a static type system with a single type in disguise. I've literally never heard "monotyped" used as a term in a dynamic context.
In Types and Programming Languages, Ben Pierce says "Terms like 'dynamically typed' are arguably misnomers and should probably be replaced by 'dynamically checked', but the usage is standard". Generally, you'll see 'tag' used by type theorists to distinguish what dynamic languages are doing from what a static language considers a type.
Type systems have existed as a field in math for over a century and predate programming languages by decades. They do a slightly different sort of thing vs dynamic checking, and many type system features like generics or algebraic data types make sense in a static context but not in a dynamic one.
Hum... Not so much. You can do polymorphism with dynamic types perfectly well, with mechanisms that are exactly equivalent to the ones used in static languages (Python people just love that). You can also have tagged unions with a mechanism completely equivalent to algebraic data types (which everybody ends up doing in JSON sooner or later). You can also do out-of-order verifications in a logical fashion, rewrite code at run time, or anything else even. It's compile time that has limitations on what it can do; runtime has none.
I am not at all against inventing new names. But I am really against naming concepts that are not independent from each other. And insisting on those is just signaling-oriented pedantry. So you can very well insist that the dynamic thing is not called "type", but you can't really do it in a way that implies the only thing you can get out of them is a type mismatch error.
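For what it's worth, the "tagged unions everybody ends up doing in JSON" pattern mentioned above usually looks something like this (the shape names here are invented for illustration):
// A hand-rolled tagged union: the "kind" field plays the role of an ADT constructor
const circle = { kind: "circle", radius: 2 };
const rect   = { kind: "rect", width: 3, height: 4 };

function area(shape) {
  switch (shape.kind) {
    case "circle": return Math.PI * shape.radius ** 2;
    case "rect":   return shape.width * shape.height;
    default:       throw new TypeError("unknown shape kind: " + shape.kind);
  }
}

area(circle); // ~12.566
area(rect);   // 12
The main thing the dynamic version gives up is that the unknown-kind case is caught at runtime in the default branch instead of being ruled out at compile time.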
What kind of runtime tag corresponds to generics, exactly?
Python handles generics essentially the same way that Java 1.0 handled generics: it just kinda punts on it. In Java 1.0, a list is a heterogeneous collection of Objects. Object, in Java 1.0, lets you write polymorphic code. But it's not really the same sort of thing; that's why they added generics.
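The JS analogue of that heterogeneous-collection situation, to make the "why generics" point concrete (an illustrative sketch, not from the thread):
// Nothing constrains what goes into the array, so the mistake only surfaces at use time
const names = ["alice", "bob"];
names.push(42);                              // the runtime is perfectly happy with this

const shouted = names.map(n => n.toUpperCase());
// TypeError: n.toUpperCase is not a function -- thrown only once the bad element is reached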
Ish.
There's typing à la Curry, where semantics don't depend on static types, and typing à la Church, where semantics can depend on static types.
Haskell's typeclasses, Scala's implicits and Rust's traits are a great example of something that inherently requires Church style typing.
One of the nice things typeclasses let you do is to write functions that are polymorphic on the return type, and the language will automagically pick the right one based on type inference. For example, in Haskell, the result of the expression
fromInteger 1
depends on the type ascribed to it. Use it somewhere that expects a double? It'll generate a double. Use it somewhere you expect a complex number? You'll get a complex number. Use it somewhere you're using an automatic differentiation library? You'll get whatever type that AD library defined.
That's fundamentally not something you can do in Python. You have to go with the manual implementation passing approach, which is incredibly painful, so people do it very sparingly.
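For contrast, the "manual implementation passing" workaround mentioned above amounts to passing by hand the dictionary that Haskell would infer from the return type; a hypothetical sketch in JS (the names are invented, this is not any particular library):
// Explicitly pass the "instance" that a typeclass would let the compiler pick for you
const doubleNum  = { fromInteger: n => n };                   // plain JS number
const complexNum = { fromInteger: n => ({ re: n, im: 0 }) };  // toy complex number

function one(numInstance) {
  return numInstance.fromInteger(1);
}

one(doubleNum);   // 1
one(complexNum);  // { re: 1, im: 0 }
Every call site has to thread numInstance through by hand, which is the pain being described.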
More to the point, though, limitations have both costs and benefits. There's a reason Python doesn't have goto and that its strings are immutable, even though those are limitations. The question is always whether the costs outweigh the benefits or not.
Why? IMO that's perfectly valid. The various type coercions are sometimes crazy, but IMO the rule that a non-empty string is coerced to true and an empty string to false is very simple to follow. The snippet is not even a gotcha, I don't see anything worth failing over. Putting "true" or "false" in a string doesn't change that.
I am dumb. The more things I need to think about when reading code that is not the logic of the code, the worse it is. Any time I have to spend thinking about the peculiarities of the way the language handles something is time wasted.
I'll give a very simple example; think of it as if you're trying to find a bug. Assume we're in a dynamic language that allows implicit conversion like this. We can write our code very "cleanly" as follows:
if(!someVar) doSomething();
-> ok, now we gotta check where someVar's value was last set to know what type of data this is. Then I need to remember or look up how that specific type is coerced into a bool.
When trying the same code in a statically typed language that doesn't do implicit coercion, that code will fail to run/compile, so you'll probably have something like this:
if(someVar.length() == 0) doSomething();
-> this time I can just look at the type of someVar to see it's a string and it's clear what the condition actually means.
The second option is both easier to read and less bug-prone, even without the type system. It takes maybe 3 seconds longer to type, but if your coding productivity is that limited by typing speed, then I envy you.
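Concretely, the failure mode being described, using standard JS truthiness (doSomething and handle are placeholder names for illustration):
function doSomething() { console.log("doing something"); }

// Intended as "do something when the string is empty"
function handle(someVar) {
  if (!someVar) doSomething();
}

handle("");         // logs, as intended
handle(undefined);  // also logs -- someVar was never set
handle(0);          // also logs -- someVar is not even a string here
handle(NaN);        // also logs
handle("false");    // does NOT log -- a non-empty string is truthy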
NaN===NaN
is the fault of the floating point standard tho, I believe.
Yes, that's my point.
Also, a huge proportion of the list is just not understanding IEEE float behaviour and blaming the language for it. Exactly like this post is doing. All those weird number things JS does are because it uses only floats for everything, and every language that uses floats will behave the exact same way.
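A few of the usual suspects, which are IEEE 754 double behaviour rather than anything JS invented (the same results fall out of doubles in C, Java, Python, and so on):
NaN === NaN              // false -- IEEE 754 defines NaN as unequal to everything, itself included
0.1 + 0.2                // 0.30000000000000004
0.1 + 0.2 === 0.3        // false -- binary doubles cannot represent 0.1 or 0.2 exactly
9007199254740993 === 9007199254740992   // true -- integers above 2^53 lose precision
Number.MAX_SAFE_INTEGER  // 9007199254740991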
So, uh, first example: instead of comparing based on type, the developer decided to convert the values? Why?!