nymnympseudonym

joined 1 week ago
[–] nymnympseudonym@piefed.social 14 points 1 day ago (1 children)

And props to the dedicated folks still running Invidious instances like https://yewtu.be/

TBH I chose Funkwhale for my solution because it looked easy and worked out of the box: I just had to add a single Docker container and a subdomain to my existing site.

It wound up being more or less what you describe.

I may well follow OP's guide and nuke my Funkwhale, despite the work I put into it and the fact that it basically works for its intended purpose.
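For anyone wondering what "a single Docker container and a subdomain" looks like in practice, here's a minimal sketch of the kind of compose file I mean, assuming Funkwhale's all-in-one image; the hostname, port mapping, and data path are illustrative, not copied from my actual setup, so check the official install docs before using it:

```yaml
# Minimal single-container Funkwhale sketch (illustrative values).
# A reverse proxy on the existing site forwards music.example.com
# to localhost:5000.
services:
  funkwhale:
    image: funkwhale/all-in-one:latest   # assumed image name, verify against docs
    restart: unless-stopped
    environment:
      FUNKWHALE_HOSTNAME: music.example.com   # your subdomain
      NESTED_PROXY: "1"                       # running behind an existing proxy
    ports:
      - "5000:80"
    volumes:
      - ./funkwhale-data:/data   # music library, database, uploads
```

The appeal is that the whole stack (web app, worker, database) lives in one container, so the only integration work on the host is the proxy rule for the subdomain.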

[–] nymnympseudonym@piefed.social 7 points 1 day ago* (last edited 1 day ago) (4 children)

Question: did you consider Funkwhale, and if so, why did you choose this other stack instead?

EDIT: fix link sry

[–] nymnympseudonym@piefed.social 2 points 2 days ago (4 children)

you're gonna have to get ads

I don't think that necessarily follows.

Have you heard of Ghost, the self-hostable Patreon-like platform?

Just a quarter of an inch away from making America great again

What's crazy is that conservatives have been hoarding gold and waiting for the collapse of the dollar for decades. Now that they are in charge, they are finally bringing about the needless calamity they were hunkering down for.

JFC I don't want to trade Gold for potatoes, I want Instacart to my house

Banana republic comes to mind

[–] nymnympseudonym@piefed.social 113 points 2 days ago (17 children)

Fuck GOOG for normalizing surveillance capitalism
Fuck YouTube in particular for making it basically impossible to usefully host an Invidious proxy any more and for their algorithmic manipulation

PeerTube is the Way

[–] nymnympseudonym@piefed.social 0 points 1 week ago* (last edited 1 week ago)

Perplexity prompt: "quotes from openai about safety testing for gpt-oss"

Reply below:


Safety Testing Measures

  • OpenAI’s safety process for open-weight release included technical audits (robustness, bias detection), structured adversarial testing, and evaluation of prompt-injection and jailbreak vulnerabilities.
  • The organization also employed both automated and human review to test for outputs that could cause harm, leveraging cross-disciplinary teams including outside researchers, ethicists, and cybersecurity experts.
  • Feedback from these rounds led to incremental model improvements before the open weights went public.

Transparency and External Collaboration

  • OpenAI has collaborated with third-party security and ethics researchers to validate its safety protocols and stress-test new models prior to release.
  • The company acknowledges that “Releasing open weights is a significant responsibility due to risks of misuse. We want to be transparent about our process and invite the AI community to help report and address issues that may arise post-release.”

[–] nymnympseudonym@piefed.social 8 points 1 week ago (4 children)

maybe our averages are different

[–] nymnympseudonym@piefed.social 10 points 1 week ago* (last edited 1 week ago) (6 children)

As a person who has been managing software development teams for 30+ years, I have an observation.
Invariably, some employees are "average". Not super geniuses, not workaholics, but people who (say) have been doing a good job with customer support. Generally they can code simple things and know the OS versions we support as a power user -- but not as well as a sysadmin.

I do find that if I tell them to use ChatGPT to help debug issues, they do almost as well as if a sysadmin or more experienced programmer had picked up the ticket. The troubleshooting gets better, and sometimes they even fix an actual root-cause bug in our product code.
