this post was submitted on 03 Dec 2025
734 points (94.3% liked)
Fediverse memes
you are viewing a single comment's thread
No, python can be incredibly fast for IO when scaled properly.
You generally don't run a single process, or even a single program, to serve a website. There are task queues, callbacks, microservices, etc., so the bottleneck is almost never the programming language itself but the tooling, and Python's web tooling is still miles ahead. That's why big projects ship more Django than Rust, and why all the AI training runs on Python rather than Rust.
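To illustrate the IO point with a toy sketch (the `fetch` coroutine here is hypothetical, standing in for a real database or HTTP call): asyncio lets a single Python process overlap many IO waits instead of paying for them one at a time.

```python
import asyncio
import time

async def fetch(i):
    # simulate an IO-bound call, e.g. a database query or HTTP request
    await asyncio.sleep(0.1)
    return i

async def main():
    # run 100 "requests" concurrently on a single thread
    return await asyncio.gather(*(fetch(i) for i in range(100)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
# all 100 finish in roughly 0.1s of wall time, not 10s
```

The interpreter is "slow", but it spends almost the entire 0.1s doing nothing, which is exactly the workload web servers have.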
Don't get me wrong, Rust is a brilliant language, but Python can often be the better choice.
Finally, you can outsource high-performance tasks to Rust or C from within Python fairly easily these days.
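For Rust specifically that usually means PyO3/maturin, which needs a build step, so as a self-contained illustration of the same idea here is a minimal ctypes sketch calling into the C math library instead (library lookup assumed to behave as on a typical Linux system):

```python
import ctypes
import ctypes.util

# locate the C math library; on Linux this typically resolves to libm.so.6
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# declare the C signature: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

The hot loop runs in compiled code; Python is just the glue, which is the same division of labor PyO3 gives you with Rust.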
Python is an interpreted language, which is fundamentally slower than a compiled language like Rust. However, the main performance bottleneck is actually the SQL queries, and I believe we put a lot more effort into optimizing them compared to Piefed.
That makes sense to me logically. Are there advanced caching techniques being deployed? I’m really curious about this.
Not sure what you mean by "advanced caching". There is some basic caching for data which rarely or never changes, for example most things in /api/v3/site. But other data like post listings changes a lot, so caching is not an option; instead it's a matter of optimizing the SQL queries (using the right indexes, reducing the number of joins, making benchmarks and looking at query plans). Here is an issue on this topic: https://github.com/LemmyNet/lemmy/issues/5555
I thought the biggest problem for Python would be the GIL, as it cannot share memory between processes and therefore needs a database or some other tool to share state between them. Though in hindsight, most web services use a database to read and write data anyway, and thus don't work out of shared process memory.
Threading from a single process is a bad scaling strategy anyway, so the GIL is rarely an issue. You're right that most big web stacks do indeed use a database/queue/cache layer for orchestrating multiple processes.