LazerDickMcCheese

joined 2 years ago
[–] LazerDickMcCheese@sh.itjust.works 65 points 6 days ago (1 children)

Friendly reminder that streaming services have negatively impacted artists and art cultivation. Headbanging while blackout drunk at a dive bar gig, without directly giving the band(s) a penny, would help them more than their semiannual Spotify payout

I've spent so much time testing RAM, using DISM, and scanning drive health...it's nauseating. Considering the machine is fine with the old GPU (which I want to rehouse in a different machine), I feel comfortable ruling out the other peripherals (mouse, keyboard, audio interface). But correct me if I'm wrong here

I ran MSI Afterburner for a while too; forgot to mention that. Even under load, none of the components went over 30°C. I stress tested the CPU and GPU for a long time just to see if it made the system more unstable, but it didn't seem to make a difference

No other USB issues

The motherboard is becoming my concern. Before I disassemble my computer again and buy yet another part, I'd like to make absolutely sure that this is the problem and not (for example) a simple setting change

Different cable as in HDMI? In that case no, and I don't have a different DP cable to try. I'm using the one that came with my display

 

I bought a Samsung Odyssey G9 and an Nvidia 5070 Ti about a month ago. The monitor came first and works great. A week and a half later, I got the GPU. The case was 1mm too small for it, so I bought a new case (great start). Now my video (DisplayPort) and audio (USB to an audio interface) outputs drop, regardless of activity, and then my fans ramp up. The only solution I've found is a manual restart.

My PSU was almost a decade old anyway, so I bought a new one just to rule that out (again, great). Event Viewer and Reliability Monitor showed kernel errors related to the Nvidia drivers, so I switched from the standard driver to the Studio driver to rule that out: same result. Then I updated my motherboard BIOS: same result. At that point I felt comfortable calling the GPU defective and returned it (had to pay $171 for shipping, again great). The replacement GPU arrived yesterday, and my outputs just dropped again and needed a manual restart. This time, however, I'm not seeing any error codes. I'm losing my mind; I really need help from a tech wizard.

- Win11 Pro
- Gigabyte Z490 Pro AX
- Intel i9-10900F
- 16GB RAM (I think Vengeance)
- Nvidia 5070 Ti
- Corsair RM1000X

Thanks for the info! I've heard very little about Fedora, so I assumed it had a steep learning curve. My only experience with Arch-based distros is whatever they put on Steam Decks, and my friend with one has had problems every time he's over

[–] LazerDickMcCheese@sh.itjust.works 0 points 2 weeks ago (2 children)

I'd consider it, got recommendations?

[–] LazerDickMcCheese@sh.itjust.works 4 points 2 weeks ago (6 children)

As far as my novice knowledge goes, this isn't a fixable "issue". I'd love to use Debian as my main OS for everything, but I know there are going to be issues with Steam/GOG games and GPU drivers. My patience and tolerance for a "daily driver" is much lower than for my servers, so as far as I know that pretty much limits me to Mint (which isn't as cool)

[–] LazerDickMcCheese@sh.itjust.works 1 points 2 weeks ago (1 children)

I'm glad it's working. When I tried Docker on Windows a few years ago, it was pure pain, so bad that I gave up and started learning Linux. If it's as simple as you suggest, that's great news for people getting into it

I spent several hours last night talking about FOSS projects and tech certifications to a guy in entry-level IT. I'm out here doing my best, guys

[–] LazerDickMcCheese@sh.itjust.works 44 points 1 month ago (5 children)

Fingers crossed for social skills in FOSS communities; then it's game over for big tech

In many cases...yes, and it's embarrassing

As you've already heard, you're unlikely to feel a difference across most distros. I'll recommend Debian or Ubuntu; I use both

[–] LazerDickMcCheese@sh.itjust.works 2 points 1 month ago (5 children)

I'm using Mullvad; it's been great for me. I know it's a fork, and I don't care

 

I followed YouTube videos, and all I get from my domain is "server not found." My domain is through Cloudflare, and my server's ports have been opened at the router.

Proxy Host Settings:

- Domain name: newly.registered.domain
- Scheme: http (I've tried https too)
- Forward hostname/IP: local.server.ip.v4
- Forward port: jelly_port
- Access list: Publicly accessible
- SSL: *.newly.registered.domain

I'd love to share my certificate info, but I don't see a way to do that. I set up the DNS challenge with a Cloudflare API token, and I remember typing in my server's public IP here too. It took many tries, but it finally accepted the settings as valid.

So what am I missing to get a reverse proxy? I thought it was supposed to work after all of that.
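A minimal external sanity-check sketch, reusing the placeholders above (domain, LAN IP, port) and assuming ports 80/443 at the router are forwarded to the Nginx Proxy Manager box rather than to Jellyfin itself:

```bash
# Does the domain resolve at all? (Cloudflare-proxied records return Cloudflare IPs)
dig +short newly.registered.domain

# Can the proxy be reached from outside on 443, and what does it answer with?
curl -vkI https://newly.registered.domain

# From inside the LAN: can the NPM host reach Jellyfin directly on its forward port?
curl -I http://local.server.ip.v4:jelly_port
```

If the dig comes back empty, it's a DNS record problem rather than an NPM one; if it resolves but curl times out, the port forwarding is the first suspect.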

I've been trying to get this going for so long that it just feels like I'm beating my head against the wall until it randomly works, ya know?

 

This is my first real dive into hosting a server beyond a few Docker containers on my NAS. I've been learning a lot over the past 5 days; the first thing I learned is that Proxmox isn't for me:

https://sh.itjust.works/post/49441546 https://sh.itjust.works/post/49272492 https://sh.itjust.works/post/49264890

So now I'm running headless Ubuntu and having a much better time! I migrated all of my Docker stuff to my new server, keeping my media on the NAS. I originally set up an NFS share (NAS->Server) so my Jellyfin container could snag the data. This worked at first but quickly crumbled without warning, and hardware acceleration (HWA) may or may not be working.

Enter the Jellyfin issue: transcoded playback (and direct, it doesn't matter) either gives a "fatal player error" or **extremely** slow, stuttery playback (basically unusable). Many Discord exchanges later, I added an SMB share (same source folder, same destination folder) to troubleshoot, to no avail, and Jellyfin-specific problems have been ruled out.

After about 12hrs of 'sudo nano /etc/fstab' and 'dd if=/path/to/nfs_mount/testfile of=/dev/null bs=1M count=4096 status=progress', I've found some weird results from transferring the same 65GB file between different drives:

- NAS's HDD (designated media drive) to NAS's SSD = 160MB/s
- NAS's SSD to Ubuntu's SSD = 160MB/s
- NAS's HDD to Ubuntu's SSD = 0.5MB/s

Both machines are connected with Cat 7A Ethernet straight to the router. I built the cables myself, tested them many times (including yesterday), and my cable tester says all cables involved are perfectly fine. I've rebooted both machines probably fifty times by now.

NAS (Synology DS923+):

- 32GB RAM
- Seagate EXOS X24
- Samsung SSD 990 EVO

Ubuntu:

- Intel i5-13500
- Crucial DDR5-4800 2x32GB
- WD SN850X NVMe

If you were tasked with troubleshooting a slow mount bind between these two machines, what would you do to improve the transfer speeds? Please note that I cannot SSH into the NAS, I just opened a ticket with Synology about it.
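A minimal troubleshooting sketch for that question, assuming SSH to the NAS eventually comes back (iperf3 has to run on both ends) and that the test file lives under the /mnt/hermes mount from the fstab below:

```bash
# Raw network throughput, independent of NFS/SMB:
#   on the NAS (once SSH works): iperf3 -s
#   on the Ubuntu box:
iperf3 -c 192.168.0.4

# Show what the NFS client actually negotiated (protocol version, rsize/wsize):
nfsstat -m

# Sequential read over the network mount, same style as the earlier dd test:
dd if=/mnt/hermes/testfile of=/dev/null bs=1M count=4096 status=progress
```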

Here's the current /etc/fstab after extensive Q&A with different online communities (both lines got cut off at the '>'):

```
# NFS mount:
192.168.0.4:/volume1/data /mnt/hermes nfs4 rw,nosuid,relatime,vers=4.1,rsize=13>

# SMB mount:
//192.168.0.4/data /mnt/hermes cifs username=_____,password=_______,vers=3.>
```
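For comparison, here's the NFS mount written out as a one-off mount command with commonly suggested options; the rsize/vers values above are cut off at the '>', so the numbers below are only illustrative, not what's actually in the fstab:

```bash
# Illustrative options only - the real rsize/wsize/vers are truncated above
sudo umount /mnt/hermes
sudo mount -t nfs4 -o rw,vers=4.1,hard,rsize=1048576,wsize=1048576 \
  192.168.0.4:/volume1/data /mnt/hermes
```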

 

cross-posted from: https://sh.itjust.works/post/49393596

I've been running Jellyfin on a Synology DS923+ for a couple of years with 'linuxserver/jellyfin:latest' with no issues until that big update recently. Suddenly it's borked: extremely slow speeds, failing to play files half the time, stuttering even when it does play. It was time for a hardware upgrade regardless; it was a miracle the NAS was able to run as many services as it did anyway.

So I built a Proxmox machine with the intent of adding hardware-accelerated transcoding (ideally I'd like to stream to a couple of old CRTs):

- ASRock B760M PRO RS
- Intel i5-13500
- 2x32GB Crucial DDR5-4800
- 1TB WD SN850X NVMe

Using the Proxmox community Jellyfin script (https://community-scripts.github.io/ProxmoxVE/scripts?id=jellyfin&category=Media+%26+Streaming), I set up an LXC, and the iGPU is supposedly being used properly. I added an NFS mount from the NAS's media folder to the Proxmox host, then bound the mount point into the LXC. At this point Jellyfin is accessible to clients via web browser, but I'm having a few issues:

  1. (Probably a Prox issue but...) Jellyfin isn't seeing all the media. I added all the libraries and did a full scan, but *maybe* 10% of the media is actually available. Hopefully this is a moot point, because...

  2. My old Docker config isn't available. I made an NFS mount from the NAS's docker folder to the Proxmox host and tried to route it into the LXC as well, but that Proxmox-to-NAS mount refuses to work, so I'd need a workaround.

  3. I have no idea if my transcoding settings are right. Intel's specs for my CPU and Jellyfin's recommendations seem to conflict slightly, and between both sets of info there are still some settings that lack guidance. Basically, can someone with a computer engineering degree double-check my settings? I tried a screenshot, but Lemmy didn't appreciate it.

- Hardware acceleration: Intel Quicksync (QSV)
- QSV Device: /dev/dri/renderD128

Hardware decoding:

- [x] H264
- [x] HEVC
- [ ] MPEG2
- [ ] VC1
- [ ] VP8
- [x] VP9
- [x] AV1
- [ ] HEVC 10bit
- [ ] VP9 10bit
- [ ] HEVC RExt 8/10bit
- [ ] HEVC RExt 12bit

Other options:

- [x] Prefer OS native DXVA or VA-API hardware decoders
- [x] Enable hardware encoding
- [ ] Enable Intel Low-Power H.264 hardware encoder
- [ ] Enable Intel Low-Power HEVC hardware encoder
- [x] Allow encoding in HEVC format
- [ ] Allow encoding in AV1 format

Edit: forgot to include logs:

```
ffmpeg version 7.1.2-Jellyfin Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (Ubuntu 13.3.0-6ubuntu2~24.04)
configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
[AVHWDeviceContext @ 0x7ab87d07ffc0] No VA display found for device /dev/dri/renderD128.
Device creation failed: -22.
Failed to set value 'vaapi=va:/dev/dri/renderD128,driver=iHD' for option 'init_hw_device': Invalid argument
Error parsing global options: Invalid argument
```

```
[WRN] The WebRootPath was not found: "/var/lib/jellyfin/wwwroot". Static files may be unavailable.
[ERR] FFmpeg exited with code 234
```

Edit: appreciate all the help!

 


Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I'm trying to migrate services from my NAS (currently docker) to this machine.

How should Jellyfin be set up, LXC or VM? I don't have a preference, but I do plan on using several Docker containers (assuming I can get this working within 28 days), in case that makes a difference. I tried WunderTech's setup guide, which uses an LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn't working for me: curl doesn't work on my machine, most install scripts don't work, nano edits crash, and mounts are inconsistent.

My Synology NAS is mounted on the host, but adding mount points to the LXC doesn't actually expose any data. For example, if my NAS's media is in /data/media/movies or /data/media/shows and the host's SMB mount is /data/, choosing the LXC mount point /data/media should work, right?
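For reference, a minimal sketch of how this is usually wired up, with the container ID (101), NAS IP, and credentials as placeholders; the SMB share has to be mounted on the host before the container starts, since the bind mount only exposes whatever is at that host path:

```bash
# On the Proxmox host: mount the NAS share (placeholders for IP/credentials)
mount -t cifs //192.168.0.4/data /mnt/data -o username=USER,password=PASS,vers=3.0

# Bind the host path into the container (101 is a placeholder container ID)
pct set 101 -mp0 /mnt/data,mp=/data

# Check from inside the container that the media actually shows up
pct exec 101 -- ls /data/media
```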

Is there a way to pass the iGPU through to an LXC or VM without editing a .conf in nano? When I try to make the suggested edits, the LXC freezes for over 30 minutes and seemingly nothing happens; the edits don't persist.
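If I'm reading the Proxmox 8 docs right, there's a pct option for device passthrough that avoids hand-editing the .conf entirely; the container ID and gid below are placeholders ('render' is often gid 104 in Debian-based containers, but it's worth checking first):

```bash
# Find the render group's gid inside the container
pct exec 101 -- getent group render

# Pass the iGPU's render node through to the container (Proxmox 8+)
pct set 101 --dev0 /dev/dri/renderD128,gid=104
```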

Any suggestions for resource allocation? I've been looking for guides or a formula for how much to give an LXC or VM, to no avail.

If you suggest command lines, please keep them simple as I have to manually type them in.

Here's the hardware:

- Intel i5-13500
- 64GB Crucial DDR5-4800
- ASRock B760M Pro RS
- 1TB WD SN850X NVMe

 

I'm assuming this isn't normal behavior, but copying and pasting commands into shell windows (host, VMs, LXCs, doesn't matter) doesn't work. I've noticed issues with curl too, despite it reporting as installed and up to date, but one thing at a time... I'm also not convinced that edits made to conf files are persisting as a result. Is this a browser issue? As always, thanks for helping out a normie in need.

Edit: it's taking at least 20 minutes for a simple conf edit to save. I have to assume that's abnormal too; this is running an i5-13500, by the way... Confirmed: the conf edits aren't saving.
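A commonly suggested workaround, since the noVNC console doesn't pass the host clipboard through: do the edits over SSH or the xterm.js shell instead, where paste works normally (the IP and container ID below are placeholders):

```bash
# From another machine on the LAN, SSH to the Proxmox host
ssh root@192.168.0.10

# From that SSH session, get a paste-friendly shell inside a container
pct enter 101
```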

17 points · submitted 3 months ago (last edited 3 months ago) by LazerDickMcCheese@sh.itjust.works to c/linux@lemmy.ml
 

Hello, not much of a Linux user (situations like this are why)...but long story short, I'm trying to rehab a ROG PC from 2018.

I made a bootable USB of the current Mint distro, but booting leads to a black screen. I tried compatibility mode, but the boot process hangs on "EFI stub: Measured initrd data into PCR 9"

The PC came with an Nvidia 2080, but it's actually a 980 Ti. Also, there's no integrated graphics here. Any troubleshooting advice would be cool

Update: if I select recovery mode then 'resume normal boot', Mint 21 works. However, this computer will be a gift to a tech-illiterate person, so that level of input will not suffice. I installed the recommended (and correct) Nvidia driver, but the results are the same
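If the black screen turns out to be the usual Nvidia modesetting issue, here's a sketch of making the nomodeset workaround stick until the proper driver behaves (assumes Mint's stock "quiet splash" kernel options):

```bash
# Add nomodeset to the default kernel command line
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub

# Apply it and reboot
sudo update-grub
sudo reboot
```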

 

Great news! I started my selfhost journey over a year ago, and I'm finding myself needing better hardware. There are so many services I want that my NAS can't handle, and I unfortunately need to add GPU transcoding to my Jellyfin setup.

What's the best OS for a machine focused on containers and (getting started with) VMs? I've heard Proxmox recommended.

What CPU specs should I be concerned about?

I'm willing to buy a pre-built as long as its hardware has sufficient longevity.

 

I see the GRUB menu, then it goes to an inactive black screen. If I select recovery then resume, it works fine. As this is supposed to be a remote machine, the problem defeats the purpose. I've heard this is usually a GPU drivers issue, so I followed the suggestions here: https://documentation.ubuntu.com/server/how-to/graphics/install-nvidia-drivers/index.html

and here (I'm running 22.04 and can't update, separate issue though): https://askubuntu.com/questions/760934/graphics-issues-after-while-installing-ubuntu-16-04-16-10-with-nvidia-graphics

Yet I still have the problem with a black screen. While I'd like it to "just work", I'm also open to extreme measures including...

- removing the GPU (assuming that would help)
- having a script run that auto-selects recovery, then resume, then logs in on my behalf (I'd need help figuring that out, though)

I also updated the GRUB config after adding "nomodeset"; that didn't fix it either.
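Since the box is meant to be headless anyway, one option (assuming the hang is the graphical session fighting the Nvidia driver, which the recovery-then-resume behavior suggests) is to boot straight to a text console and reinstall or drop the driver cleanly; a minimal sketch:

```bash
# Boot to a plain text console instead of the graphical target
sudo systemctl set-default multi-user.target

# If the Nvidia driver is still suspect, remove it and let ubuntu-drivers
# reinstall the recommended one (or skip reinstalling on a headless box)
sudo apt purge 'nvidia-*'
sudo ubuntu-drivers install
```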

 

For the uninitiated, this is software for music, and it's notoriously complicated. I have a paid version from about a decade ago, and I'm not giving them any more of my money. Reddit used to have a vsttorrents guide for this, but it's been forcibly removed. I'm trying to get Komplete 15 Ultimate, with all the added stuff I'll probably never even look at.

Edit: if anyone sees this, I'm still looking

 

I would love to seed (and cross-seed) my music library, but metadata tagging and renaming fucks the files up. How do I set up qBittorrent and Prowlarr to keep seeding after retagging?
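The usual pattern (a sketch, not specific qBittorrent/Prowlarr settings): leave the finished download untouched so qBittorrent keeps seeding it, and tag/rename a separate copy for the library. Hardlinks don't help here, because rewriting tags modifies the shared data; the paths and the beets call below are only illustrative.

```bash
# Copy the finished download into the library; the original keeps seeding as-is
cp -a "/downloads/music/Some Album (2024) [FLAC]" "/library/music/Some Album (2024) [FLAC]"

# Retag/rename only the library copy (beets shown as one example tagger)
beet import "/library/music/Some Album (2024) [FLAC]"
```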
