[-] xmunk@sh.itjust.works 87 points 6 months ago

Hell yeah, brother. Functional programmers rise up.

[-] BatrickPateman@feddit.de 34 points 6 months ago

As long as you do it without side effects...

[-] wewbull@feddit.uk 20 points 6 months ago
    total_armageddon = launch_nuclear_missile <$> [1..]

[-] FooBarrington@lemmy.world 9 points 6 months ago

Return a list of cloned functional programmers with their positions translated towards positive y!

[-] xmunk@sh.itjust.works 4 points 6 months ago

Thankfully our immutability makes us immune to fall damage.

[-] devfuuu@lemmy.world 5 points 6 months ago* (last edited 6 months ago)

Writer monads, am I right? 😅

[-] TheBananaKing@lemmy.world 13 points 6 months ago
[-] UnRelatedBurner@sh.itjust.works 3 points 6 months ago

I said fuck it, should be a good time passer. 20ish minutes later I want to be a systems programmer. Sounds fun, what can I say.

[-] marcos@lemmy.world 1 points 6 months ago

Well, then I must point out that Haskell is one of the best languages out there for system hacking...

[-] devfuuu@lemmy.world 6 points 6 months ago

Haven't used a loop in almost a decade! It's a nice life 😎

[-] drislands@lemmy.world 2 points 6 months ago

Hell yeah! Groovy programmer here, mapping closures over lists of objects.

[-] Blackmist@feddit.uk 78 points 6 months ago

As your compiler patiently turns it back into a loop.

[-] intensely_human@lemm.ee 10 points 6 months ago
[-] MareOfNights@discuss.tchncs.de 42 points 6 months ago

I never looked into this, so I have some questions.

Isn't the overhead of a new function every time going to slow it down? Like, I know that LLVM has special instructions for Haskell functions to reduce overhead, but there is still more overhead than with a branch, right? And if you don't use Haskell, the overhead is pretty extensive: pushing all registers on the stack, calling the new function, pushing buffer-overflow protection, and eventually returning and popping everything again. Plus all the other stuff (kinda language dependent).

I don't understand what advantage is here, except for stuff where recursive makes sense due to being more dynamic.

[-] technom@programming.dev 49 points 6 months ago

They aren't talking about using recursion instead of loops. They are talking about the map method for iterators. For each element yielded by the iterator, map applies a specified function/closure and collects the results in a new iterator (usually a list). This is a functional programming pattern that's common in many languages including Python and Rust.

This pattern has no risk of stack overflow since each invocation of the function is completed before the next invocation. The construct does expand to some sort of loop during execution. The only possible overhead is a single function call within the loop (whereas you could have written it as the loop body). However, that won't be a problem if the compiler can inline the function.

The fact that this is functional programming creates additional avenues to optimize the program. For example, a chain of maps (or other iterator adaptors) can be intelligently combined into a single loop. In practice, this pattern is as fast as hand written loops.
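
A minimal Haskell sketch of the pattern described above (the function names are illustrative, not from the thread):

```haskell
-- Each map applies its function to every element in turn; a chain of
-- maps like this can be fused by the compiler into a single pass over
-- the list, with no intermediate list materialized.
process :: [Int] -> [Int]
process = map (+1) . map (*2)
```

`process [1, 2, 3]` evaluates to `[3, 5, 7]`: each element is doubled, then incremented, in one traversal.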

[-] ebc@lemmy.ca 14 points 6 months ago

A great point in favour of maps is that each iteration is independent, so could theoretically be executed in parallel. This heavily depends on the language implementation, though.

[-] noli@programming.dev 3 points 6 months ago

Technically this is also possible with for loops, like with OpenMP

[-] marcos@lemmy.world 3 points 6 months ago

Imperative for loops have no guarantee at all that iterations could be executed in parallel.

You can do some (usually expensive, and never complete) analysis to find some cases, but smart compilers tend to work best when you only need them to be dumb. Having a loop that you can just blindly parallelize will sometimes lead to it being parallel in practice, while having a loop where a PhD knows how to decide whether you can parallelize it will lead to sequential programs in practice.

[-] noli@programming.dev 2 points 6 months ago

While you do have a fair point, I was referring to the case where one is basically implementing a map operation as a for loop.

[-] noli@programming.dev 22 points 6 months ago

Compiler optimizations like function inlining are your friend.

Especially in functional languages, there are a lot of tricks a compiler can use to output more efficient code due to not needing to worry about possible side effects.

Also, in a lot of cases the performance difference does not matter.

[-] expr@programming.dev 9 points 6 months ago

I'm not familiar with any special LLVM instructions for Haskell. Regardless, LLVM is not actually a commonly used backend for Haskell (even though you can use it), since it's not great at optimizing the kind of code that Haskell produces. Generally, Haskell is compiled down to native code directly.

Haskell has a completely different execution model to imperative languages. In Haskell, almost everything is heap allocated, though there may be some limited use of stack allocation as an optimization where it's safe. GHC has a number of aggressive optimizations it can do (that is, optimizations that are safe in Haskell thanks to purity that are unsafe in other languages) to make this quite efficient in practice. In particular, GHC can aggressively inline a lot more code than compilers for imperative languages can, which very often can eliminate the indirection associated with function calls entirely. https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/generated-code goes into a lot more depth about the execution model if you're interested.

As for languages other than Haskell without such an execution model (especially imperative languages), it's true that there can be the overhead you describe, which is why the vast majority of them use iterators to achieve the effect, which avoids the overhead. Rust (which has mapping/filtering, etc. as a pervasive part of its ecosystem) does this, for example, even though it's a systems programming language with a great deal of focus on performance.

As for the advantage, it's really about expressiveness and clarity of code, in addition to eliminating the bugs so often resulting from mutation.

[-] MareOfNights@discuss.tchncs.de 3 points 6 months ago

Interesting.

So it basically enables some more compiler magic. As an embedded guy I'll stay away from it, since I like my code being translated a bit more directly, but maybe I'll look into the generated code and see if I can apply some of the ideas for optimizations in the future.

[-] technom@programming.dev 8 points 6 months ago

I looked at the post again and they do talk about recursion for looping (my other reply talks about map over an iterator). Languages that use recursion for looping (like Scheme) use an optimization called 'tail call optimization' (TCO). The idea is that if the last operation in a function is a recursive call (a call to itself), you can skip all the complexity of a regular function call - like pushing variables to the stack and creating a new stack frame. This way, recursion becomes as performant as iteration and avoids problems like stack overflow.
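
As a hedged sketch of the idea, a tail-recursive sum in Haskell (names made up for illustration):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The recursive call to go is the last operation performed, so the
-- compiler can turn it into a jump instead of growing the stack.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)
```

The strict accumulator (`!acc`) matters in Haskell specifically: without it, laziness would build up a chain of thunks even though the stack stays flat.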

[-] aubeynarf@lemmynsfw.com 2 points 5 months ago

Not just calls to self - any time a function's last operation is to call another function and return its result (a tail call), tail call elimination can convert it to a goto/jump.
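
For illustration, a pair of mutually tail-recursive functions (a classic toy example, not from the thread):

```haskell
-- Each tail call is to the *other* function, yet tail call
-- elimination still compiles both into plain jumps, so the pair
-- runs in constant stack space.
isEven, isOdd :: Int -> Bool
isEven 0 = True
isEven n = isOdd (n - 1)
isOdd 0 = False
isOdd n = isEven (n - 1)
```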

[-] krewjew@lemmy.world 32 points 6 months ago

I've been learning Haskell, and now I won't shut up about Haskell

[-] affiliate@lemmy.world 2 points 6 months ago

what's the appeal of haskell? (this is a genuine question.) i've been a bit curious about it for a while but haven't really found the motivation to take a closer look at it.

[-] pkill@programming.dev 8 points 6 months ago* (last edited 6 months ago)

purely functional paradigm (immutable data structures and no shared state, which is great for e.g. concurrency) and an advanced type system (for example, you can have linear types that can only be used once). Lisps build on the premise that everything is data, leaving little room for bloated data structures or tight coupling with call chains that are hard to maintain or test. In Haskell, on the other hand, everything is a computation, which is why writing it feels more like writing mathematical equations than computer programs. It might, along with Scala, be good for data-driven applications.
Also, the purely functional style means that, according to some research, functional programming languages tend to arrive at the same solution in roughly a quarter of the LOC of procedural/OO languages. Just look at solutions to competitive programming problems.
And even though I'm not a big fan of opinionated frameworks, compare some Phoenix codebase to a Symfony or even a Rails one to see how much cleaner the code is.

But if you're new to FP you should rather pick Scheme, Elixir or Clojure, since the paradigm itself can be hard enough to wrap your head around at first (though Elixir is a bit imperative, depending on how deep you're ready to dive in), not to mention having to learn about ADTs and category theory.

[-] krewjew@lemmy.world 2 points 6 months ago

My favorite feature is how currying is applied literally everywhere. You can take any function that accepts 2 args, pass in a single arg and return a new function that accepts one arg and produces the result. In Haskell, this is handled automatically. Once you wrap your head around using partially applied and fully saturated functions you can really start to see the power behind languages like Haskell
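
A small sketch of the idea (hypothetical names, assuming nothing beyond the Prelude):

```haskell
-- add's type Int -> Int -> Int really means Int -> (Int -> Int):
-- applying it to one argument yields a new one-argument function.
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: add saturated with its first argument only.
addTen :: Int -> Int
addTen = add 10
```

This is also what makes `map (add 10) [1, 2, 3]` read so naturally: the partially applied `add 10` is exactly the one-argument function that map expects.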

[-] CanadaPlus@lemmy.sdf.org 4 points 6 months ago* (last edited 6 months ago)

It's been noted that functional code accumulates fewer bugs, because there's no way to accidentally change something important somewhere else, and Haskell is the standard for functional languages. Also, it might just be me, but the type system also feels perfect when I use it. Like, my math intuition says there's no better way to describe a function; it's showing the logic to me directly.

Where Haskell is weak is when interactivity - either with the real world or with other components - comes up. You can do it, but it really feels like you're writing normal imperative code, and then just squirreling it away in a monad. It's also slower than the mid-level languages. That being said, if I need to quickly generate some data, Haskell is my no-questions go to. Usually I can do it in one or two lines.

[-] victorz@lemmy.world 1 points 6 months ago

Out of curiosity, what kind of data do you generate with Haskell? And would you be willing to show an example of a one or two liner that generates the data? 🙏

[-] CanadaPlus@lemmy.sdf.org 2 points 6 months ago* (last edited 6 months ago)

Uh, let's look at my GHCi history...

It looks like I was last searching for 12-member sets of permutations of 7 which come close to generating every possible permutation of seven elements, as well as meeting a few other criteria, for an electronics project. It ended up being more like 10 lines plus comments, though, ~~plus a big table generated by GAP, which I formatted into a Haskell list using probably a line of Haskell plus file loading.~~

Unfortunately for providing code, me playing with the finished algorithm has eaten up my whole 100 lines of history. So, here's a two-liner I posted on Lemmy before, that implements a feed-forward neural net. It's not exactly what you asked for, but it gives you an idea.

layer layerInput layerWeights = map relu $ map sum $ map (zipWith (*) layerInput) layerWeights

foldl layer modelInput modelWeights

In practice, you might also need to define relu in another line:

relu x = if x > 0 then x else 0

Edit: No wait, I think that was a different problem related to the same project. There's another module attached that generates all permutations of n items. After breaking it up so it's a bit less write-only:

allPermutations :: Int -> [[Int]]
allPermutations 1 = [[0]]
allPermutations n = concat $ map (addItem $ allPermutations (n-1) ) [0..(n-1)]

addItem :: [[Int]]  -> Int -> [[Int]]
addItem old new = map (\y -> new : map (fitAround new) y) old

fitAround :: Int -> Int -> Int
fitAround n y
	| y >= n	= y+1
	| otherwise	= y
[-] victorz@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

BTW: I think you need to put the "```" on separate lines.

test

test

Edit: huh, nope, that had no difference in effect for me. Wonder why your code doesn't render for me...

[-] victorz@lemmy.world 2 points 6 months ago

I learned some Haskell. Did some problems on Advent of Code and such. But since then I've heard about OCaml, which seems super interesting. Hopefully the tooling is simpler, but I've not had time to try anything yet.

Does anybody have any experience with it?

[-] owsei@programming.dev 1 points 6 months ago

I'm pretty sure Tsoding has some videos with it

[-] victorz@lemmy.world 2 points 6 months ago

I'll check it out, thank you very much! I appreciate it a lot. 🙂🙏👍

[-] CanadaPlus@lemmy.sdf.org 17 points 6 months ago* (last edited 6 months ago)

Unironically this. I know it's the same assuming there's no bugs (lol), but it's just faster to type and easier to read, at least to me.

[-] ByGourou@sh.itjust.works 10 points 6 months ago

I always found map more confusing than loop for some reason. Especially nested.
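
For comparison, a nested map in Haskell (a toy example, not from the thread):

```haskell
-- The outer map walks the rows; the inner map walks each row's
-- elements. The nesting of the maps mirrors the shape of the data.
doubleAll :: [[Int]] -> [[Int]]
doubleAll = map (map (*2))
```

`doubleAll [[1, 2], [3]]` gives `[[2, 4], [6]]`, where a loop version would need two explicit counters.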

[-] CanadaPlus@lemmy.sdf.org 2 points 6 months ago

To each their own.

[-] MagosInformaticus@sopuli.xyz 13 points 6 months ago

Or sometimes fold them over trees of objects!
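
A hedged sketch of the idea: give a hypothetical binary tree a Foldable instance, and the standard folds (sum, length, maximum, ...) work on it for free.

```haskell
-- A simple binary tree with values at the nodes.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- foldr visits the left subtree, then the node value, then the right
-- subtree, threading the accumulator through in order.
instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l
```

With this instance, `sum (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))` is 6.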

[-] magic_lobster_party@kbin.run 9 points 6 months ago

Objects? What is this OOP nonsense?

[-] sum_yung_gai@lemm.ee 5 points 6 months ago

Immutable in order to protect against parallel code changing the size of the iterable?

[-] Donkter@lemmy.world 9 points 6 months ago

Immutable because the only lists worth iterating over are the ones I define for myself.

[-] KindaABigDyl@programming.dev 3 points 6 months ago

#pragma omp parallel for

[-] TechNerdWizard42@lemmy.world 2 points 6 months ago

Ah yes the X86 instruction set for mapping.

Everything is a conditional branch loop. Always has been.

this post was submitted on 19 Apr 2024
513 points (98.1% liked)

Programmer Humor
