[-] Burn1ngBull3t@lemmy.world 5 points 2 weeks ago

Well that’s easy to remember !


Hello !

At work, we have been discussing hosting (internally) some work-related stories that we find funny.

I've been looking for tools to do that. It should be quite simple and display one story at a time, nothing fancy.

Couldn't find anything quite like that, so I was wondering if you guys knew one ? If not, I might develop it and share it.
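
For scale, here's about all I have in mind; a rough sketch assuming one story per line in a stories.txt (netcat flags vary between versions, this is the traditional/Debian one):

    # serve one random story per request on port 8080
    while true; do
      story=$(shuf -n 1 stories.txt)
      printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain; charset=utf-8\r\n\r\n%s\r\n' "$story" \
        | nc -l -p 8080 -q 1
    done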

Thanks !


Hello !

I recently 3D printed a train whistle that usually works with a mouthpiece: it works by simply blowing air into it.

However, I would like to convert it into a whistle for my bike. For that I would need a system that could blow air into it for me, at the press of a button.

Any idea what I could start with to build that ? It would be best if the circuitry were quite compact too.

Thanks !

[-] Burn1ngBull3t@lemmy.world 5 points 6 months ago* (last edited 6 months ago)

Hello @theit8514

You are actually spot on ^^

I did look in my exports file, which was like so:

    /mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)

I added a localhost line just in case:

    /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)

It didn't solve the problem. I went to investigate with the mount command:

  • Will mount on 192.168.0.65: mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

  • Will NOT mount on 192.168.0.55 (NAS): mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

  • Will mount on 192.168.0.55 (NAS): mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test

The mount -t nfs 192.168.0.55 one is what the cluster actually does. So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.
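
In the meantime, a quick way to check what the server actually exports and to whom:

    # list the active NFS exports on the NAS
    showmount -e 192.168.0.55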

EDIT:

It was actually WAY simpler.

I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
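
For anyone hitting the same thing, the final exports line plus the re-export looks like this (exportfs -ra re-reads /etc/exports without restarting the server):

    # /etc/exports on the NAS (192.168.0.55)
    /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw) 192.168.0.55(rw)

    # apply the change
    sudo exportfs -ra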

Thanks a lot for your help @theit8514@lemmy.world !


Hello !

I currently have a problem on my Kubernetes cluster.

I have 3 nodes:

  • 192.168.0.16
  • 192.168.0.65
  • 192.168.0.55

I use an NFS storage class (kubernetes-sigs/nfs-subdir-external-provisioner) to provision volumes on an NFS share.

The NFS server is actually set up on 192.168.0.55, which is also a worker node.
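
For reference, the provisioner is pointed at that host roughly like in the chart's README (from memory; server and path match my exports file):

    helm repo add nfs-subdir-external-provisioner \
      https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner \
      nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=192.168.0.55 \
      --set nfs.path=/mnt/DiskArray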

I noticed that I have problems mounting volumes when a pod is created on the 192.168.0.55 node. If it's on one of the other two, it mounts. (The error is actually a permission denied on the 192.168.0.55 node.)

I would guess that something goes wrong when kube tries to mount the NFS since it's on the same machine ?

Any idea how I can fix this ? Cheers !

[-] Burn1ngBull3t@lemmy.world 2 points 6 months ago

Hello ! You might find Sylius suitable. It's an open-source e-commerce framework based on Symfony.

I'm pretty sure it covers all your requirements. The thing is that it's a headless framework, so a frontend needs to be built on top of it if you want some custom features.
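
If you want to poke at it first, spinning up their standard distribution is quick (roughly, from the Sylius docs):

    composer create-project sylius/sylius-standard my-shop
    cd my-shop
    # configure the database in .env.local, then
    bin/console sylius:install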

Hope that helps !

[-] Burn1ngBull3t@lemmy.world 1 points 8 months ago

Exactly, thanks !

[-] Burn1ngBull3t@lemmy.world 2 points 8 months ago

Haha sorry indeed, it’s Kubernetes related and not Windows WeDontSayItsName related 😄

[-] Burn1ngBull3t@lemmy.world 1 points 8 months ago

You are completely right.

However, in my mind (might be wrong here), if I used another node, I wouldn't be using the RAID array fully.

While setting it up, I thought it's either:

  • a NAS storageClass attached to the RAID array, no Longhorn
  • Longhorn with no RAID, but replication set to 3

In either case, the availability of my data would be about the same, right ?

(Then there are options to back up my PVs to S3 with Longhorn and all that, which I would have to set up again though.)

Thanks for your answer !

[-] Burn1ngBull3t@lemmy.world 1 points 8 months ago

Hello ! Thanks for your response!

Yes, RAID is used for the availability of my data here; with or without Longhorn there wouldn't be much difference (especially since I only use one specific node).

And you would be right: since the other nodes are unschedulable, storage will be available only on my “storage node”, so if that one goes down my storage goes down.

That’s why Longhorn might be overkill for me, but there are functions to restore and back up to S3, for example, that I would need to set up, I guess.


Hello selfhosted !

Continuing my journey of setting up my home k3s cluster.

I’ve been asking myself if Longhorn might be overkill for my home cluster. Here’s what I did:

3 machines, each running k3s. One of them has storage in RAID 5, and I don’t want to use any storage from the other two.

Thing is, I had to configure replicas to 1 in Longhorn for my PV to be green.
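
Concretely, "replicas to 1" is just a StorageClass parameter; a sketch of what I mean (parameter names per the Longhorn docs):

    # longhorn-single-replica.yaml, applied with: kubectl apply -f longhorn-single-replica.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-single-replica
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "1"
      staleReplicaTimeout: "2880"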

Hence my question: since the data is already replicated in the array, shouldn’t I just use an NFS storage class instead ?

Thanks !

[-] Burn1ngBull3t@lemmy.world 7 points 9 months ago

Hello ! First question would be : why buy an external drive if you are buying a NAS in the first place ?

Just in case: there are 2 slots available in the NAS you linked, meaning you could buy 2 internal drives for its storage.

On the hosting part, Jellyfin might be able to run, judging by the NAS’s specifications. However, you have to take into account whether the NAS operating system can run it (maybe there is an app store for it, like Synology has) and also that media transcoding might be limited (to easily stream 4K content around your house, for example).
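
If the NAS can run containers, Jellyfin itself is usually a single container, something like this (paths are placeholders):

    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /path/to/config:/config \
      -v /path/to/cache:/cache \
      -v /path/to/media:/media:ro \
      jellyfin/jellyfin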

[-] Burn1ngBull3t@lemmy.world 19 points 10 months ago

Still from The IT Crowd: it’s when Reynholm gets sued by his ex-wife. The quote isn’t from that episode though.

[-] Burn1ngBull3t@lemmy.world 3 points 10 months ago

I think you are right indeed. I had the idea to maybe use the graphics card for AI stuff and play with it. I would probably go with kube and add the NAS to Longhorn (which I already set up).

[-] Burn1ngBull3t@lemmy.world 2 points 10 months ago

Would have been cool to add yet another machine to the cluster, especially if I could use the NAS for the kube VolumeClaims. 🤔


Hello selfhosted !

I had a thought I’d like to share with you.

I currently have a motherboard that was used for gaming and that I would like to reuse.

I thought of 2 ways to use it:

  • as a NAS
  • as a Steam streaming server (I have a graphics card lying around too)

I also have other computers running Kubernetes (2 nodes)
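
(If I did add it to the cluster, it would just be the usual agent join, assuming k3s like my current nodes; the token lives in /var/lib/rancher/k3s/server/node-token on the server:)

    # run on the new machine
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -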

My question is: should I go Kubernetes for this one too, or down another path ?

Thanks

[-] Burn1ngBull3t@lemmy.world 10 points 1 year ago

It’s actually about how people build their images; some include sensitive data in them (when they definitely should not). It’s the same problem as exposed S3 buckets actually: nothing wrong with Docker in itself.
