Yeah, that's also something I found oddly missing (i.e. that replacing crypto systems worldwide, if it becomes necessary, will take a very long time).
nightsky
Some people act as if "I can't run it now, therefore it's garbage", which is just such a nonsense approach to any kind of theoretical work.
Agreed -- and I hope my parent post, where I said the presentation is interesting, was not interpreted as thinking that way. In a sibling post I pointed out the theme in there which I found insightful, but I certainly didn't want to imply that theoretical work, even when purely theoretical, is bad or worthless.
Before clicking the link I thought you were going for aluminium, i.e. a variation of
Wow this is some real science, they even have graphs.
Thank you!
Oh wow, thank you for taking the time! :)
Just one question:
None of the other assorted proposals (loop quantum gravity, asymptotic safety, …) got lucky like that.
Is this because the alternative proposals appeared unpromising, or have they simply not been explored enough yet?
Thanks for adding the extra context! As I said, I don't have the necessary level of knowledge in physics (and also in cryptography) to have an informed opinion on these matters, so this is helpful. (I've wanted to get deeper in both topics for a long time, but life and everything has so far not allowed for it.)
About your last paragraph, do you by chance have any interesting links on "criticism of the criticism of string theory"? I wonder, because I have heard the argument "string theory is non-falsifiable and weird, but it's pushed over competing theories by entrenched people" several times already over the years. Now I wonder, is that actually a serious position or just conspiracy/crank stuff?
Comparing quantum computing to time machines or faster-than-light travel is unfair.
I didn't interpret the slides as an attack on quantum computing per se, but rather as an attack on over-enthusiastic assertions of its near-future implications. If the likelihood of near-future QC breaking real-world cryptography is so extremely low, it's IMO okay to make a point by comparing it to things which are (probably) impossible. It's an exaggeration of course, and as you point out, the analogy isn't technically accurate, but I still think it makes a good point.
What I find insightful about the comparison is that it puts its finger on a particular brain worm of the tech world: the unshakeable belief that every technical development will grow exponentially in its capabilities. So as soon as the most basic version of something is possible, it is believed that the most advanced forms of it will follow soon after. I think this belief arose because it's what actually happened with semiconductors, starting with the bold (in its day) prediction that was Moore's law, and then later again with the growth of the internet.
And now this thinking is applied to everything all the time, including quantum computers (and, as I pointed out in my earlier post, AI), driven by hype, by FOMO, by the fear of "this time I don't want to be among those who didn't recognize it early". But there is no inherent reason why a development must follow such a trajectory. That doesn't mean, of course, that it's impossible or won't get there eventually, just that it may take much more time.
So in that line of thought, I think it's ok to say "hey look everyone, we have very real actual problems in cryptography that need solving right now, and on the other hand here's the actual state and development of QC which you're all worrying about, but that stuff is so far away you might just as well worry about time machines, so please let's focus more on the actual problems of today." (that's at least how I interpret the presentation).
Interesting slides: Peter Gutmann - Why Quantum Cryptanalysis is Bollocks
Since quantum computers are far outside my expertise, I didn't realize how far-fetched it currently is to factor large numbers with quantum computers. I already knew it's not near-future stuff for practical attacks on e.g. real-world RSA keys, but I didn't know it's still that theoretical. (Although of course I lack the knowledge to assess whether that presentation is correct in its claims.)
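For anyone curious what "factoring with a quantum computer" actually refers to here: the core of Shor's algorithm is reducing factoring to order finding. The sketch below (my own toy illustration, not from the slides; the function names are made up) does that reduction entirely classically for a tiny number, brute-forcing the one step, order finding, that a quantum computer would speed up:

```python
from math import gcd

def order(a, n):
    # Brute-force the multiplicative order of a mod n, i.e. the
    # smallest r with a^r = 1 (mod n). This is the only step Shor's
    # algorithm performs on quantum hardware; everything else is classical.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    # Classical skeleton of Shor's reduction: from the order r of a
    # random base a coprime to n, recover nontrivial factors of n.
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2 != 0:
        return None  # unlucky base, would retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None  # another unlucky case
    return sorted((gcd(y - 1, n), gcd(y + 1, n)))

print(shor_classical(15, 7))  # [3, 5]
```

The brute-force `order` loop is exponential in the bit length of n, which is exactly why the classical version is useless for RSA-sized numbers and why the entire practical threat hinges on building a quantum period-finder at scale.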
But also, while reading it, I kept thinking how many of the broader points it makes also apply to the AI hype... (for example, the unfounded belief that game-changing breakthroughs will happen soon).
Fun question to think about: when was the last time HP made a product that you would actually recommend to anyone?
A few weeks ago there was news of a "human authored" certification for books. Newspapers and many other publications should think about something like that, too. If it existed, it would certainly be something I would look for when deciding whether to subscribe to something or not...
The AI guys are really pulling the exact same cheat every time, aren't they? Thanks to pivot-to-ai for continuing to shine a light on this... I hope the wider press eventually catches on, too.