Since my last post about https://piratefeeds.net/, I've added a lot of cool new feeds to the site. I've gotta hand it to reddit, they really came through with the donations. We now have a bunch of high-demand feeds: Chapo, TrueAnon, Trashfuture, Trillbilly, and a dozen more!

I'm still hoping for new feed donations though; the more the merrier. In particular, I'd love to have feeds for:

  • ~~Citations Needed~~
  • ~~Lions Led By Donkeys~~
  • ~~Radio War Nerd~~
  • ~~This Machine Kills~~
  • Glue Factory
  • ~~Bungacast~~
  • The Worst Of All Possible Worlds
  • ~~Boonta Vista~~
  • Bad Hasbara
  • ~~Blank Check~~
  • ~~Bad Faith~~
  • Ten Thousand Posts
  • The Antifada
  • Your Kickstarter Sucks
  • Varn Vlog
  • This Is Revolution
  • Diet Soap
  • We're Not So Different
  • Cosmonaut Magazine
  • If Books Could Kill

Also, once again, duplicate feeds are still more than welcome as backups.

The people will be eternally grateful to donors for their service!

NOTE: some users apparently can't resolve the domain. My best guess is that some ISPs' DNS servers block the site as part of anti-piracy filtering. If that's the case for you, try setting your DNS server to a big public one like Cloudflare's 1.1.1.1 or Google's 8.8.8.8. Alternatively, using a VPN seems to fix it.
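If you want to check whether your resolver is the culprit, here's a quick test (Python, stdlib only):

import socket

# If this raises socket.gaierror but the site loads fine over a VPN or a
# public resolver, your ISP's DNS is almost certainly filtering the domain.
try:
    print(socket.gethostbyname("piratefeeds.net"))
except socket.gaierror as error:
    print("DNS lookup failed:", error)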

NOTE: some of the feeds have been reverted to the free versions; it seems Patreon detected something was wrong with them. I've paused fetching those feeds for now, while I figure out how to keep them from being detected in the future. In the meantime, https://jumble.top/ is back online.

nerd stuff


Latest version of the feed fetching script:

import json
import random
import sys
import xml.etree.ElementTree as ElementTree

import requests

if __name__ == "__main__":
    # Feeds file expected JSON format:
    # [
    #    {
    #       "name": "...",
    #       "inactive": false,     # optional, if true the feed won't be fetched assuming the cached version won't change
    #       "description": "...",  # optional, if missing the original one will be kept
    #       "urls": [ 
    #           {
    #               "source": "...",
    #               "url": "..."
    #           }, ...
    #       ]
    #    }, ...
    # ]
    feeds_file = sys.argv[1]
    output_dir = sys.argv[2]

    print("\n#### Fetching feeds...")
    with open(feeds_file) as file:
        feeds = json.load(file)
        print("Loaded feeds file")

        for feed in feeds:
            # Do not fetch inactive feeds, kept as archives
            if feed.get('inactive') is True:
                print(f"## Skipping inactive feed {feed['name']}")
            else:
                print(f"## Processing {feed['name']}...")
                sources = list(enumerate(feed['urls']))
                # Shuffle the URLs so we don't always pick the first one
                if len(sources) > 1:
                    random.shuffle(sources)

                response = None
                headers = {'User-Agent': 'AntennaPod/3.7.0'}
                # Try fetching the feed with each of the available URLs
                for i, source in sources:
                    print(f"Attempting to fetch {feed['name']} from source #{i}...")
                    url = source['url']
                    try:
                        # Timeout so one dead mirror can't hang the whole run
                        response = requests.get(url, headers=headers, timeout=30)
                        if response.status_code == 200:
                            print(f"Fetched {feed['name']}")
                            break
                        else:
                            print(
                                f"ERROR: {feed['name']} URL #{i} returned error: {response.status_code} {response.content}")
                    except Exception as e:
                        print(f"ERROR: network error while fetching {feed['name']} with URL #{i}: ", e)
                if response is None or response.status_code != 200:
                    print(f"ERROR: failed to fetch {feed['name']}! No URLS worked")
                    continue

                try:
                    root = ElementTree.fromstring(response.content)
                    # Replace the description since it often contains PII
                    if 'description' in feed:
                        channel_description = root.find('channel/description')
                        if channel_description is not None:
                            channel_description.text = feed['description']

                    ElementTree.ElementTree(root).write(
                        f"{output_dir}/{feed['name']}.xml", encoding='utf-8', xml_declaration=True)
                    print(f"Processed and saved {feed['name']}")
                except Exception as e:
                    print(f"ERROR: failed to process feed {feed['name']}:", e)

tompom@hexbear.net 1 point, 1 week ago

Any hope of resolving the American Prestige feed problem?

21Gramsci@hexbear.net 3 points, 1 week ago

I'm afraid not, unless it turns out they publish a premium feed on a different platform that's less strict about feed sharing.

To get the SupportingCasts feed to work I would probably need to implement proxying and caching for the media files, which would require:

  • a bunch of time and effort that I don't really want to put into it right now
  • extra costs for additional server bandwidth, which I don't wanna pay for myself on top of the current costs
  • a throwaway premium subscription that I can play around with, without fear of getting someone's account banned.

If somebody wants to sponsor that work I might consider it, but otherwise I don't plan on working on that right now.
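
For the curious, the feed-side half of that proxying is the easy part; something like this sketch (hypothetical proxy endpoint, untested). The expensive part is the server behind it that actually fetches and caches the media files:

import xml.etree.ElementTree as ElementTree
from urllib.parse import quote

PROXY_BASE = "https://example.invalid/media"  # hypothetical caching proxy

def rewrite_enclosures(feed_xml: bytes) -> bytes:
    # Point every episode enclosure at the proxy so the shared feed never
    # exposes a subscriber's personal, token-bearing media URLs.
    root = ElementTree.fromstring(feed_xml)
    for enclosure in root.iter('enclosure'):
        original = enclosure.get('url', '')
        enclosure.set('url', f"{PROXY_BASE}?src={quote(original, safe='')}")
    return ElementTree.tostring(root)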

tompom@hexbear.net 1 point, 1 week ago (edited)

What about periodically dumping the files on https://kemono.cr/?

21Gramsci@hexbear.net 2 points, 1 week ago

I don't touch the media files and I'd like to keep it that way for now. I've already explained why on a similar feature request.

tompom@hexbear.net 1 point, 1 week ago

I understand. I thought maybe you were an AP listener, so...
