this post was submitted on 24 Jul 2025
43 points (100.0% liked)

podcasts

20112 readers
22 users here now

Podcast recommendations, episode discussions, and struggle sessions about which shows need to be cancelled.

Rest In Power, Michael Brooks.

founded 5 years ago

Since my last post about https://piratefeeds.net/ I've added a lot of cool new feeds to the site. I've gotta hand it to reddit, they really came through with the donations. We now have a bunch of high-demand feeds: Chapo, TrueAnon, Trashfuture, Trillbilly, and a dozen more!

I'm still hoping for new feed donations though, the more the merrier. In particular I'd love to have feeds for:

  • ~~Citations Needed~~
  • ~~Lions Led By Donkeys~~
  • ~~Radio War Nerd~~
  • ~~This Machine Kills~~
  • Glue Factory
  • ~~Bungacast~~
  • The Worst Of All Possible Worlds
  • ~~Boonta Vista~~
  • Bad Hasbara
  • ~~Blank Check~~
  • ~~Bad Faith~~
  • Ten Thousand Posts
  • The Antifada
  • Your Kickstarter Sucks
  • Varn Vlog
  • This Is Revolution
  • Diet Soap
  • We're Not So Different
  • Cosmonaut Magazine
  • If Books Could Kill

Also, once again, duplicate feeds are still more than welcome as backups.

The people will be eternally grateful to donors for their service!

NOTE: some users apparently can't resolve the domain. My best guess is that some ISPs/DNS servers block the site as part of anti-piracy filters. If that's happening to you, try setting your DNS server to a big public one like Cloudflare's 1.1.1.1 or Google's 8.8.8.8. Alternatively, using a VPN seems to fix it.
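
If you want to double check that it's DNS and not something else, here's a quick way to test from Python (just a sketch; swap in whatever domain you're testing):

import socket

try:
    # Resolves using whatever DNS server your system is configured with
    print(socket.gethostbyname("piratefeeds.net"))
except socket.gaierror as e:
    # Failing here while the site loads fine over a VPN
    # points at resolver-level blocking
    print("DNS resolution failed:", e)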

NOTE: some of the feeds have been reverted to the free versions. It seems Patreon detected something was wrong with them. I've paused fetching the feeds for now while I figure out how to avoid detection in the future. In the meantime https://jumble.top/ is back online.

nerd stuff


Latest version of the feed fetching script:

import json
import random
import sys
import xml.etree.ElementTree as ElementTree

import requests

if __name__ == "__main__":
    # Feeds file expected JSON format:
    # [
    #    {
    #       "name": "...",
    #       "inactive": false,     # optional; if true the feed won't be fetched, assuming the cached version won't change
    #       "description": "...",  # optional; if missing the original description will be kept
    #       "sources": [
    #           {
    #               "source": "...",
    #               "url": "..."
    #           }, ...
    #       ]
    #    }, ...
    # ]
    feeds_file = sys.argv[1]
    output_dir = sys.argv[2]

    print("\n#### Fetching feeds...")
    with open(feeds_file) as file:
        feeds = json.load(file)
        print("Loaded feeds file")

        for feed in feeds:
            # Do not fetch inactive feeds, kept as archives
            if feed.get('inactive'):
                print(f"## Skipping inactive feed {feed['name']}")
            else:
                print(f"## Processing {feed['name']}...")
                sources = list(enumerate(feed['sources']))
                # Shuffle the URLs so we don't always pick the first one
                if len(sources) > 1:
                    random.shuffle(sources)

                response = None
                # Pose as a regular podcast client when fetching
                headers = {'User-Agent': 'AntennaPod/3.7.0'}
                # Try fetching the feed with each of the available URLs
                for i, source in sources:
                    print(f"Attempting to fetch {feed['name']} from source #{i}...")
                    url = source['url']
                    try:
                        response = requests.get(url, headers=headers, timeout=30)
                        if response.status_code == 200:
                            print(f"Fetched {feed['name']}")
                            break
                        else:
                            print(
                                f"ERROR: {feed['name']} URL #{i} returned error: {response.status_code} {response.content}")
                    except Exception as e:
                        print(f"ERROR: network error while fetching {feed['name']} with URL #{i}: ", e)
                if response is None or response.status_code != 200:
                    print(f"ERROR: failed to fetch {feed['name']}! No URLS worked")
                    continue

                try:
                    root = ElementTree.fromstring(response.content)
                    # Replace the channel description since it often contains PII
                    if 'description' in feed:
                        channel = root.find('channel')
                        description = channel.find('description') if channel is not None else None
                        if description is not None:
                            description.text = feed['description']

                    # Write with an explicit UTF-8 encoding and XML declaration
                    ElementTree.ElementTree(root).write(
                        f"{output_dir}/{feed['name']}.xml", encoding='utf-8', xml_declaration=True)
                    print(f"Processed and saved {feed['name']}")
                except Exception as e:
                    print(f"ERROR: failed to process feed {feed['name']}:", e)

[–] tompom@hexbear.net 2 points 4 days ago (3 children)

A few suggestions to make the webpage more ergonomic: two columns, numbered entries, time and date of the last update, and a different color for newly added podcasts.

[–] 21Gramsci@hexbear.net 1 points 4 days ago (2 children)

Genuinely thank you for the feedback! My first thoughts:

  • Two columns wouldn't work on mobile. I could serve different pages based on device, but I'm lazy and that seems like a lot of work right now.
  • Numbered entries are a good idea, but the numbering would change every time I add/remove feeds if I want to keep alphabetical ordering, which kinda defeats the purpose.
  • For last update do you mean last time the feeds were fetched or last time I modified the index page?
  • New podcast highlighting would be nice indeed, but it would probably have to be manual.

The first problem is that right now I'm serving a static page behind a reverse proxy, so it's difficult to implement any backend logic. The second problem is that I

  • am not a frontend dev
  • am bad at frontend dev
  • hate frontend dev.

If someone has suggestions on how I could implement stuff like this without moving to a web app with a backend I'm all ears.

[–] tompom@hexbear.net 1 points 4 days ago (1 children)

> For last update do you mean last time the feeds were fetched or last time I modified the index page?

The latter

[–] 21Gramsci@hexbear.net 1 points 4 days ago

Ok yeah, I can add that.
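
Probably something as simple as having the script that regenerates the page stamp a placeholder, roughly like this (the template file and placeholder name are just examples):

from datetime import datetime, timezone

# Assumed setup: index.template.html contains the literal
# placeholder LAST_UPDATED somewhere in its markup
with open("index.template.html") as f:
    page = f.read()

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
with open("index.html", "w") as f:
    f.write(page.replace("LAST_UPDATED", stamp))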