www.acadboost.com/courses/11th-JEE-MainAdvanced-Notes
ok, i figured out how to download :) i used firefox devtools, might be slightly different on chromium.
after going to a pdf page (like this), open devtools on the network tab, select XHR, filter with the string '/preview/url', and refresh. you'll get one item whose response contains 'url' and 'p'. as you also experienced, the pdf is password protected. the site has a JS function named 'parseJData', which you can call from the console like parseJData(p, !0), where p is the 'p' value from the xhr response, e.g. parseJData("9dd1bbb2b96776b603b2666fb3173133x8Y+a7Fx0tdy2ntJSUCmLFQQW+BMJFz+UGUrdSyaNz2FpFx2fSJvzEJ8JdWXGbeH16ac82d92bc66da09f044fe9faebaaa9", !0). that's your pdf password.
you totally can automate this, but there don't seem to be that many PDFs (if you're only going for that one lecture). i'd just keep devtools open, check "persist logs" (click the options button to find it), browse through all the pdf pages, save everything as a HAR file, and write a one-off script to extract all the 'url' and 'p' values.
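the HAR-scraping step above could look something like this, a minimal sketch in python. it assumes the '/preview/url' XHR responses are JSON bodies with top-level 'url' and 'p' keys (that matches what shows up in the network tab, but the exact shape is an assumption); HAR files themselves always store response bodies under log.entries[].response.content.text:

```python
import json

def extract_preview_entries(har):
    """Pull (url, p) pairs out of a parsed HAR dict, keeping only
    entries whose request URL hits the '/preview/url' endpoint."""
    results = []
    for entry in har["log"]["entries"]:
        if "/preview/url" not in entry["request"]["url"]:
            continue
        # HAR stores the response body as text under content.text;
        # it may be missing for requests that never completed.
        body = entry["response"].get("content", {}).get("text", "")
        try:
            data = json.loads(body)
        except json.JSONDecodeError:
            continue  # not a JSON body, skip
        # assumed response shape: {"url": ..., "p": ...}
        if "url" in data and "p" in data:
            results.append((data["url"], data["p"]))
    return results
```

you'd load the saved HAR with json.load(open("whatever.har")) and feed each extracted 'p' through parseJData in the browser console to get the matching password.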
Which part is the password?
Also, there is an 'allowDownload' flag. How do I make it true? And why are there 2 (sometimes 3) .pdf requests in the network tab when there is only 1 pdf on the page?
sent you a pm, hope it helps
thanks a ton