nymnympseudonym

joined 3 weeks ago
nymnympseudonym@piefed.social 3 points 6 days ago (2 children)

Any favorites? What do you think about state space models?

nymnympseudonym@piefed.social 14 points 1 week ago (1 children)

And props to the dedicated folks still running Invidious instances like https://yewtu.be/

TBH I chose Funkwhale for my solution because it looked easy and out of the box: I just had to add a single Docker container and a subdomain to my existing site.

It wound up being more or less what you describe.

I may well follow OP's guide and nuke my Funkwhale, despite the work I put into it and the fact that it does basically work for its intended purpose.

nymnympseudonym@piefed.social 7 points 1 week ago* (last edited 1 week ago) (4 children)

Question: did you consider Funkwhale, and if so, why did you choose this other stack instead?

EDIT: fix link sry

nymnympseudonym@piefed.social 2 points 1 week ago (4 children)

"you're gonna have to get ads"

I don't think that necessarily follows.

Have you heard of Ghost, the self-hosted Patreon-like platform?

Only a quarter inch left to Make America Great Again

What's crazy is that conservatives have been hoarding Gold and waiting for the collapse of the Dollar for decades. Now that they are in charge, they are finally bringing about the needless calamity they were hunkering down for.

JFC I don't want to trade Gold for potatoes, I want Instacart to my house

Bananas Republic comes to mind

nymnympseudonym@piefed.social 114 points 1 week ago (17 children)

Fuck GOOG for normalizing surveillance capitalism
Fuck YouTube in particular for making it basically impossible to usefully host an Invidious proxy anymore, and for their algorithmic manipulation

PeerTube is the Way

nymnympseudonym@piefed.social 0 points 2 weeks ago* (last edited 2 weeks ago)

Perplexity prompt: "quotes from openai about safety testing for gpt-oss"

Reply below:


Safety Testing Measures

  • OpenAI’s safety process for open-weight release included technical audits (robustness, bias detection), structured adversarial testing, and evaluation of prompt-injection and jailbreak vulnerabilities.
  • The organization also employed both automated and human review to test for outputs that could cause harm, leveraging cross-disciplinary teams including outside researchers, ethicists, and cybersecurity experts.
  • Feedback from these rounds led to incremental model improvements before the open weights went public.

Transparency and External Collaboration

  • OpenAI has collaborated with third-party security and ethics researchers to validate its safety protocols and stress-test new models prior to release.
  • The company acknowledges that “Releasing open weights is a significant responsibility due to risks of misuse. We want to be transparent about our process and invite the AI community to help report and address issues that may arise post-release.”
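
FWIW, if you want to re-run this yourself instead of pasting it into the web UI, here's a rough sketch against Perplexity's OpenAI-compatible chat completions API. The endpoint, model name, and the PPLX_API_KEY env var are my assumptions, not anything from the quoted reply:

```python
import os

import requests  # assumes the `requests` package is installed

# Rough sketch, not a verified recipe: re-issue the same prompt via
# Perplexity's OpenAI-compatible chat completions API. Endpoint, model
# name, and the PPLX_API_KEY env var are assumptions on my part.
API_KEY = os.environ["PPLX_API_KEY"]

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "sonar",  # assumed model name; swap in whichever you use
        "messages": [
            {
                "role": "user",
                "content": "quotes from openai about safety testing for gpt-oss",
            }
        ],
    },
    timeout=60,
)
resp.raise_for_status()
# Print the assistant's answer text from the OpenAI-style response schema
print(resp.json()["choices"][0]["message"]["content"])
```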

nymnympseudonym@piefed.social 8 points 2 weeks ago (4 children)

maybe our averages are different
