GeekyOnion

joined 2 years ago
[–] GeekyOnion@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Behold: The Fire Turd (what happens after forgetting that spicy food has a two-burn cycle)

Not to be confused with the Burning Bush (what happens when there may be an undiagnosed STI)

[–] GeekyOnion@lemmy.world 1 points 2 days ago

Just a home lab for fun and experimenting.

[–] GeekyOnion@lemmy.world 2 points 2 days ago

Excellent suggestion! Thank you!

[–] GeekyOnion@lemmy.world 1 points 2 days ago

Thanks! I didn't even think about running a local app, but this may be a fun find to experiment with!

[–] GeekyOnion@lemmy.world 2 points 2 days ago

That's a great idea! Thanks! I've got unbound running locally on one instance of Pihole, and I've got it in an LXC for the other instance. Pulling the configs from git would make them much easier to sync.
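Something like this is what I was picturing: a minimal sketch, assuming the Pihole local DNS list and unbound config live in a shared git repo (the repo location and file paths here are just placeholders, not my actual setup):

```
#!/bin/bash
# Hypothetical sync script run on each Pihole host (via cron or a systemd timer).
# Repo location and file paths are illustrative only.
set -euo pipefail

REPO_DIR=/opt/dns-config        # assumed local checkout of the shared config repo

cd "$REPO_DIR"
git pull --ff-only              # grab the latest shared config

# Local DNS records for Pihole and the shared unbound config
cp "$REPO_DIR/pihole/custom.list"   /etc/pihole/custom.list
cp "$REPO_DIR/unbound/pi-hole.conf" /etc/unbound/unbound.conf.d/pi-hole.conf

# Reload so the new records take effect
pihole restartdns reload
systemctl restart unbound
```

Drop that into a cron job or systemd timer on each instance and they should stay in lockstep.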

[–] GeekyOnion@lemmy.world 3 points 2 days ago

Thanks! I'll take a look at those!

[–] GeekyOnion@lemmy.world 2 points 2 days ago

Huh. Good tip! I'll have to test this out.

[–] GeekyOnion@lemmy.world 19 points 2 days ago (4 children)

I have a "main" Pihole on a Raspberry Pi, and I set up another instance in a VM for secondary functions.

 

How are folks syncing local DNS records across multiple Piholes?

 

I've been rebuilding all my content hosted on a Synology NAS + Proxmox installed on a NUC, and moving it to a dedicated box with beefy/brutal stats. I was messing around with Proxmox and unprivileged LXC containers for a while, using a ZFS pool on the host and passing it through with mount points while mapping the users in the container to groups on the host. It was going pretty well, except that I ran into what I thought was insanely odd and inconsistent behavior. In summary, in the same LXC I could pass through two mount points with the same users and permissions (etc.), and one would show up mapped correctly while the other wouldn't.
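For reference, the setup looked roughly like this. This is a sketch from memory, and the pool name, container ID, and uid/gid ranges are placeholders rather than my exact values:

```
# /etc/pve/lxc/101.conf (unprivileged container) -- illustrative values only
unprivileged: 1

# Bind-mount two datasets from the host ZFS pool into the container
mp0: /tank/media,mp=/mnt/media
mp1: /tank/backups,mp=/mnt/backups

# Map container uid/gid 1000 onto host uid/gid 1000, and leave everything
# else in the usual 100000+ range for unprivileged containers
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

(The host also needs matching entries in /etc/subuid and /etc/subgid, e.g. "root:1000:1".) With identical mappings on both mount points, I'd expect both to show the same ownership inside the container, which is why the one-works-one-doesn't behavior drove me up the wall.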

I gave up on that approach after a few unhelpful responses of "you're doing it wrong." That may be the case, but I was more focused on why the issue was inconsistent rather than just failing.

I'm now running an Unraid VM, with my HBA (and USB stick) passed through, lots of RAM, and an 8-pack of processors. I thought Unraid was pretty slick when I ran the trial a while ago, but I was kind of unimpressed with its performance in this configuration. After getting all the drives configured correctly (I made the mistake of mixing up "array" and "pool" after my initial foray into ZFS) and weeding out three bad drives from ServerPartDeals, I had a stable array, all my LXC containers configured on Proxmox, NFS going over a dedicated local bridge (10.10 for the WIN!), and my data moved over from the old NAS. I was pretty happy.
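For the NFS-over-a-dedicated-bridge piece, it boils down to an export restricted to the host-only 10.10.x.x network. Unraid manages exports from its GUI, but the effective result is along these lines; the share name, subnet, and addresses below are placeholders:

```
# On the Unraid VM: the generated /etc/exports line (illustrative)
"/mnt/user/media" 10.10.10.0/24(rw,sync,no_subtree_check)

# On the Proxmox side, an /etc/fstab entry over the same bridge (illustrative)
10.10.10.2:/mnt/user/media  /mnt/media  nfs  vers=4.2,rw,_netdev  0  0
```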

During the whole process, I had been watching/monitoring lots of odd behavior on Proxmox, with Unraid, and with my data transfers. My Pihole instance was going crazy with load averages, even though it only serves the LXCs on the host rather than the whole house, and the IO pressure stall was constantly over 90%. Given that several of the disks I ordered from the supplier were bad, I thought I was dealing with some crazy stuff. I was taking down the LXCs and the VM one by one, trying to find where that stall pressure was coming from.
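In case anyone else wants to chase stall pressure the same way: the numbers Proxmox graphs come from the kernel's PSI files, so you can watch them directly while things are running. The container ID below is just an example, and the cgroup path may differ depending on your setup:

```
# System-wide IO pressure ("some" = % of time at least one task was stalled on IO)
cat /proc/pressure/io

# Per-container IO pressure on a cgroup v2 Proxmox host (container 101 as an example)
cat /sys/fs/cgroup/lxc/101/io.pressure

# Quick per-process view of who is actually doing the IO
iotop -o
```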

As I was troubleshooting, I wondered if it was maybe IO pressure on the host OS disks (NVMe drives attached directly to the motherboard, ZFS mirrored), and did a quick "zpool list." Hmm. That's funny. Why is my old destroyed (or so I thought) pool still showing up??? When I first switched to Unraid, I had exported my pool (doom-pool) and then imported it in Unraid after passing through the HBA. After deciding that ZFS was nice but not necessary, I destroyed the pool in Unraid and reconfigured for a standard XFS array. It looked like the export, import, and destroy had somehow done something strange, and the drives were still showing up as online and in use on the host. I tried to kill the pool again on the host, and everything would just sit and spin.
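For anyone curious, the kind of thing I was poking at on the host looked like this. The labelclear step is destructive and the device name is a placeholder, so treat this as a sketch rather than a recipe:

```
# What the host still thought it had
zpool list
zpool status doom-pool

# Trying to get the stale pool off the host -- these are the ones that
# would just sit and spin for me
zpool export -f doom-pool
zpool destroy -f doom-pool

# If the pool is gone but ZFS labels linger on the disks, labelclear wipes
# the on-disk metadata (destructive; /dev/sdX1 is a placeholder)
# zpool labelclear -f /dev/sdX1
```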

I ended up shutting down the host and having to cut power (the ZFS services had been hung for about 12 minutes before I decided it was OK), and when I rebooted, the old pool was gone from the host and (holy moly) everything was working better. The IO pressure was gone. The CPU spikes and lags were gone. Pihole wasn't going nuts anymore.

The one thing I haven't tried yet is doing some disk-to-disk copies on Unraid. This was one of the places where I saw aberrant behavior, with transfers limited to 120 MB/s (I have 14TB 12Gb SAS drives in my array), but I don't have any heavy files I need to move.
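If I do get around to it, a quick check would be something along these lines, copying straight between array disks to bypass the /mnt/user share layer (disk paths are placeholders):

```
# Copy a chunk of data directly from one array disk to another and watch throughput
rsync -a --progress /mnt/disk1/testdata/ /mnt/disk2/testdata/

# Or a simple sequential write test against a single disk
dd if=/dev/zero of=/mnt/disk2/ddtest bs=1M count=8192 oflag=direct status=progress
```

Right now I'm just happy that it wasn't more bogus hardware, or a problem with my HBA or motherboard or something. Anywho, just wanted to share.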

[–] GeekyOnion@lemmy.world 5 points 2 weeks ago (1 children)

I’m hoping it’s resolved before I get off work!

[–] GeekyOnion@lemmy.world 6 points 2 months ago

This is so cool! I want to look up when it was added when I’m back at a full sized interface.

[–] GeekyOnion@lemmy.world 10 points 3 months ago

You know what they say, “in for a penny, in for a pound.” Might as well get your time’s worth out of your prison sentence.

[–] GeekyOnion@lemmy.world 8 points 4 months ago (1 children)

Is this an alphabet or a bingo card?

 

I’m planning on clearing some space in my yard for a Rainier cherry tree, and I’m curious if anyone has some tips. I’ve read a couple of guides for selecting good spots, soil amendments, and placement near other cherry trees. Is there anything that caught you by surprise?

4
submitted 2 years ago* (last edited 2 years ago) by GeekyOnion@lemmy.world to c/seattlekraken@lemmy.world
 

While I’m not expecting a 10 goal blow-out, I would like to see us break up the winning streak the Stars have going. At this point in the season, I just want to see good, fun hockey.

 

Over the past couple of weeks, I've seen a lot of content that's ripping on Arch Linux, from pictures of stickers being removed from laptops, to comments about it having a lot of bloat or frustrating package management. Was there a change to their policies, strategies, or distro that has turned this once proud vessel into a floating psycho ward?
