There is a webcomic called Strong Female Protagonist that I want to preserve (in case the website is ever lost), but I'm not sure how.

The image you see above is not a webpage of the site but rather a drop-down-like menu. There is a web crawler called WFDownloader (I'm using the Windows exe inside Bottles) that can grab images and follow links to grab images N pages down, but since this is a drop-down menu I'm not sure it will work.

There is also the issue of organizing the images. WFDownloader doesn't have options for organizing.

What I'm thinking is to somehow translate the HTML for the drop-down menu into separate XML files based on issues/titles, run a script to download the images, have each image named after its own hyperlink, and have each issue in its own folder. Later on I can create a stitched-up version of each issue.
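
Roughly, the pipeline I have in mind would look like this (a rough sketch, not tested against the live site: the drop-down selectors come from the archive-dropdown HTML quoted in the comments below, and the comic-image selector is a guess):

    # Sketch: parse the archive drop-down, make one folder per issue,
    # and save each page's image named after the tail of its hyperlink.
    import os
    import requests
    from bs4 import BeautifulSoup

    BASE = "https://strongfemaleprotagonist.com"

    soup = BeautifulSoup(requests.get(BASE, timeout=30).text, "html.parser")

    # One top-level <li> per issue, labelled by <span class="chapter-label">
    for issue in soup.select("ul.archive-dropdown > li"):
        label = issue.select_one("span.chapter-label")
        folder = label.get_text(strip=True).replace(" ", "-") if label else "Unsorted"
        os.makedirs(folder, exist_ok=True)
        for a in issue.select("a[href]"):
            page_url = a["href"]
            # e.g. .../issue-1/page-0/ -> issue-1_page-0
            name = "_".join(page_url.rstrip("/").split("/")[-2:])
            page = BeautifulSoup(requests.get(page_url, timeout=30).text, "html.parser")
            img = page.select_one("div.comic img") or page.select_one("img")
            if img is None:
                continue
            ext = os.path.splitext(img["src"].split("?")[0])[1] or ".png"
            with open(os.path.join(folder, name + ext), "wb") as f:
                f.write(requests.get(img["src"], timeout=30).content)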

top 16 comments
[–] Archr@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Usually when I need to do something like this I use Python and BeautifulSoup4. You basically get the content of the web page and use bs4 to parse it and pull out the correct links. You'll need to look at the source of the page to understand its format.

If Python requests isn't able to get the right data, then you might need Selenium to drive a full web browser that renders the page and runs any JavaScript that populates it. Then you hand that page content to bs4.
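
The Selenium half is only a few lines more (a sketch; assumes Firefox is installed, and recent Selenium versions fetch the driver themselves):

    # Render the page in a real browser so any JS runs, then hand the
    # resulting HTML to bs4 exactly as before.
    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("https://strongfemaleprotagonist.com/")
        soup = BeautifulSoup(driver.page_source, "html.parser")
    finally:
        driver.quit()

    for a in soup.select("ul.archive-dropdown a[href]"):
        print(a["href"])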

Edit: I know someone posted a link to archive but I figured some instructions would also be useful.

[–] harmbugler@piefed.social 11 points 4 days ago* (last edited 4 days ago) (2 children)

https://archive.org/details/Strong-Female-Protagonist-webcomic-online

You can download it as a 320 MB zip, but it's probably best to torrent it and seed for others.

[–] brucethemoose@lemmy.world 8 points 4 days ago* (last edited 4 days ago)

The Internet Archive is a treasure.

It's going to hurt when they annoy the wrong person and get sued out of existence.

[–] cactus_head@programming.dev 1 points 4 days ago (1 children)

Thank you ❤️, but my country doesn't allow any outgoing connections.

[–] Harmonics041@feddit.uk 0 points 3 days ago (1 children)

Which country is this, if it's safe to say? I haven't heard of anything like that before.

[–] cactus_head@programming.dev 1 points 3 days ago

Sorry, I meant uploads and port forwarding (this is in Egypt). I tried to seed for MyAnonamouse (a private tracker for books) but couldn't.

[–] shrek_is_love@lemmy.ml 4 points 4 days ago (1 children)

If you view the source of the homepage, you'll see some HTML that starts with this:

<div class="archive-dropdown-wrap">
    <ul class="archive-dropdown">                       
        <li><span class="chapter-label">Issue 1</span><ul><li><a href="https://strongfemaleprotagonist.com/issue-1/page-0/">Cover

That's the HTML for the drop-down. Although if I were you, I'd look into taking advantage of WordPress' JSON API, since that website uses WordPress.

For example, here's a list of the images uploaded to the site in JSON format: https://strongfemaleprotagonist.com/wp-json/wp/v2/media?per_page=100&page=1 (Limited to 100 entries per page)
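
Walking that endpoint is straightforward (a sketch; per_page and page are standard WP REST parameters, and WordPress returns an error status once you run past the last page):

    # Page through the media endpoint and collect each entry's source_url.
    import requests

    API = "https://strongfemaleprotagonist.com/wp-json/wp/v2/media"

    urls, page = [], 1
    while True:
        r = requests.get(API, params={"per_page": 100, "page": page}, timeout=30)
        if r.status_code != 200:  # past the last page
            break
        batch = r.json()
        if not batch:
            break
        urls.extend(item["source_url"] for item in batch)
        page += 1

    with open("urls.txt", "w") as f:
        f.write("\n".join(urls))

The resulting urls.txt can then be fed straight to wget -i urls.txt.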

[–] cactus_head@programming.dev 3 points 4 days ago

I can see the URL for each image when opening the JSON link (source_url:), and each image is labeled in the correct order as far as I can see, but how do I grab the URLs?

Maybe look into the awk language to compile a list of URLs, then pass them through curl?

[–] MonkderVierte@lemmy.zip 1 points 3 days ago* (last edited 3 days ago)

If nothing else helps: right-click, download, next. Then zip the directory and rename .zip to .cbz. Boom, comic book.
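
The zip-and-rename step in Python terms, for the record (a sketch; Issue-1 is a placeholder directory of downloaded pages):

    # A .cbz is just a renamed zip of the page images.
    import zipfile
    from pathlib import Path

    folder = Path("Issue-1")
    with zipfile.ZipFile(folder.with_suffix(".cbz"), "w") as cbz:
        for img in sorted(folder.iterdir()):
            cbz.write(img, arcname=img.name)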

[–] LaSirena@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

If you're just after the comics themselves, then look at dosage. It looks like it supports this webcomic.

https://github.com/webcomics/dosage/

[–] eldavi@lemmy.ml 1 points 4 days ago (1 children)

I'm presuming that you've tried something like curl or wget wrapped in a for loop to iterate through each page, and that it somehow didn't work.
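
For reference, that loop would be roughly this in Python (a sketch; assumes the site's issue-N/page-N URL scheme and stops at the first 404):

    # Brute-force walk: bump the page number until the site 404s.
    import requests

    issue, page = 1, 0
    while True:
        url = f"https://strongfemaleprotagonist.com/issue-{issue}/page-{page}/"
        r = requests.get(url, timeout=30)
        if r.status_code == 404:
            break
        # ... pull the comic image out of r.text here ...
        page += 1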

[–] ergonomic_importer@piefed.ca 1 points 4 days ago (1 children)

robots.txt would probably put a stop to that

[–] eldavi@lemmy.ml 2 points 4 days ago (1 children)

I was asking OP if they used a manual approach, which wouldn't be impacted by something like robots.txt.

[–] cactus_head@programming.dev 1 points 4 days ago* (last edited 4 days ago)

I haven't used curl or wget; I've yet to start using the command line (outside of solving the odd Linux issue or organizing family photos), but I'm open to learning.

[–] doodoo_wizard@lemmy.ml -2 points 3 days ago (1 children)

wget -m website.url

It’s probably better not to try to preserve webcomics. It’s like the only bad form of art.

[–] cactus_head@programming.dev 2 points 3 days ago

> It’s probably better not to try to preserve webcomics. It’s like the only bad form of art.

That's a bizarre take. Why? From the sound of it you don't hate comics in general, so what about them being on the web makes them "bad"?

Anyway, that honour goes to any podcast by anyone who calls themself a comedian.