Your website has a banner that says it uses cookies and that by using it I acknowledge having read the privacy policy, but if I click More Information it takes me to a page the wiki says wasn't created yet.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Never noticed. I don't do anything with the cookies anyway; it's just a self-hosted DokuWiki, no ads, no data collection, nothing. I don't even store logs.
I might need to write the privacy policy... Will do tomorrow.
If you don't process any user data beyond what is technologically required to make the website work, you don't need to inform the user about it.
I'm not familiar with DokuWiki, but here are a few thoughts:
- A privacy policy is good to have regardless of what you do with the rest of my comments.
- Your site is creating a cookie called "dokuwiki" for user tracking.
- The cookie is created regardless of user agreement, rather than waiting for acceptance (implied or explicit). As in: I visit the page, I click nothing, and I already have the dokuwiki cookie.
- I like umami analytics as a cookieless Google Analytics alternative. They have a generous free cloud option for hobby users, and umami is also self-hostable; a rough compose sketch is below. Then you can get rid of any banner.
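If you want to go the self-hosted route, a minimal docker-compose sketch, loosely based on umami's published compose file, looks something like this. The secret and database credentials are placeholders you'd replace:

```yaml
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://umami:umami@db:5432/umami
      DATABASE_TYPE: postgresql
      APP_SECRET: replace-with-a-long-random-string
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: umami   # placeholder; use a real password
    volumes:
      - umami-db-data:/var/lib/postgresql/data

volumes:
  umami-db-data:
```

Then you add the tracking snippet from the umami dashboard to the wiki template and drop the banner entirely.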
The dokuwiki cookie is not for user tracking but for functional use. You don't need user consent for functional use. OP should remove the useless cookie banner altogether.
Afaik the cookie policy on your site is not GDPR compliant, at least as it is currently worded. If all cookies are "technically necessary" for the function of the site, then I think all you need to do is say that. (I think for a wiki it's acceptable to require clients to allow caching of image data, so your server doesn't have to pay for more bandwidth.)
I have double-checked, but I do not have any banner on my wiki at all... Where did you see one? The only cookie is a technical cookie, used only for your preferences and not for tracking.
My only issue with it is that on my iPhone, the app constantly freezes and says I have 3 photos left to upload. It's almost certain to freeze for a few minutes, and the upload stalls as well. This behavior made it take a long time to back up my library, and it makes it a pain in the ass to share photos quickly with people. Popping into the web UI has none of these issues (just no uploading of my photos). I still quite love the app.
I've been using Immich for half a year or so now. The only problem is that it didn't do chunked uploads, so one large video just never uploaded, and I had to use Nextcloud to upload it instead. Otherwise, it's great.
Yes, I encountered this issue as well. Tweaking the NGINX settings seems to have helped (a sketch is below). Still stupid that a large upload will stall all the others.
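For reference, the settings that usually matter for large uploads through an NGINX reverse proxy are the body-size limit, the proxy timeouts, and request buffering. A hedged sketch, assuming Immich on its default port behind NGINX; the server name is a placeholder:

```nginx
server {
    server_name immich.example.com;   # placeholder

    # 0 disables the upload size limit entirely.
    client_max_body_size 0;

    # Give long uploads time to finish instead of timing out mid-transfer.
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
    send_timeout       600s;

    location / {
        proxy_pass http://127.0.0.1:2283;   # Immich's default port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;   # websocket support
        proxy_set_header Connection "upgrade";

        # Stream the request body through instead of buffering it to disk first.
        proxy_request_buffering off;
    }
}
```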
If you're self-hosting Immich on your local network, I've gotten around this by setting the Immich app to use my local IP address while on my home Wi-Fi network.
Haven't checked in a while but is there any hope for cloud storage of the image library yet? I'm kind of holding out for S3 support because I don't want to manage multiple terabytes locally.
I don't think Immich supports this natively, but you could mount an S3 store with s3fs-fuse and put the library on there without much trouble (rough sketch below). Or many other options, like WebDAV.
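A hedged sketch of the s3fs-fuse route; the bucket name, endpoint, and mount point are all placeholders, and the url/use_path_request_style options only apply to non-AWS providers like MinIO:

```sh
# Credentials file for s3fs (placeholder keys).
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

sudo mkdir -p /mnt/immich-library

# Mount the bucket; drop url/use_path_request_style for plain AWS S3.
sudo s3fs my-immich-bucket /mnt/immich-library \
    -o passwd_file="$HOME/.passwd-s3fs" \
    -o url=https://s3.example.com \
    -o use_path_request_style \
    -o allow_other

# Then point Immich's UPLOAD_LOCATION (or the library volume mount) at /mnt/immich-library.
```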
I love immich. I just wish for two things:
- synchronised deletes on client server
- the edit tools on mobile to actually work on the photo at hand instead of creating a new photo with new metadata. May as well not have the tools, tbh.
What is synchronized deletes on client server?
I assume it means that server-side asset deletions are applied to client libraries. I.e., if I take a picture on my phone but later delete the picture from Immich on another device, it will then also remove the original copy on the client (phone) that took it.
Yes, more control over what happens between server and client. Sorry, that wasn't clear.
Immich aims to be a Google Photos replacement, which has this function built in.
Now, I don't care if it works only one way, but it should be clear.
That's why I only use Immich as a gallery and not as a photo backup solution. I manage the syncing with Syncthing.
Assuming you mean Android: FYI, Syncthing for Android is discontinued, so you might want to look into other options.
https://github.com/syncthing/syncthing-android?tab=readme-ov-file#discontinued
There has been a fork out for a long time now that is still developed; it's called syncthing-fork.
I'm curious:
Which ML CLIP model did you go with, and how accurate are you finding the search results?
I found the default kinda sub-par, particularly when it came to text in images.
I switched to "immich-app/XLM-Roberta-Large-Vit-B-16Plus" and it's improved a bit, but I still find the search somewhat lacking.
The best one I have found was one of the newer ones that was added a few months ago: ViT-B-16-SigLIP__webli.
Really impressed with the accuracy, even with multi-word searches like "espresso machine".
How well does it do with text in images?
I often find searching for things like 'horse' will do a decent job bringing up images of horses, but will often miss images containing the word 'horse'.
It does OK with that. Better than the default model, but worse than the built-in search on my phone.
Thank you for this. I plan to look at the authentication part more closely, but that's the part I can't quite figure out (being an amateur at this stuff but still trying), since I'm nervous about just a password protecting it when accessing it remotely or from the phone.
Authelia, NGINX: there is so much that's confusing to me, but this might help.
I'd recommend setting up a VPN, like Tailscale (a quick-start sketch is below). The internet is an evil place where everyone hates you, and a single tiny mistake will mess you up. Remove the risk and enjoy the hobby more.
Some people will argue that serving stuff on open ports to the public internet is fine. They are not wrong, but don't do it until you know, understand, and accept the risks. ('normal_distribution_meme.pbm')
Remember, risk is ’probability’ times ’shitshow’, and other people can, in general, only help you determine the probability.
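For the Tailscale route specifically, the setup is small. A sketch using the documented install commands; the service name and port below are just examples:

```sh
# On the server hosting your services:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up    # prints a login URL the first time

# On each client, install the Tailscale app and sign in to the same tailnet.
tailscale status     # lists devices and their tailnet IPs

# Services are then reachable over the tailnet with nothing exposed publicly, e.g.:
#   http://my-server:2283    (example: Immich via its MagicDNS name)
```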
Very low WAF score though.
You mean ”hardcore WAF challenge”?
More like hardcoded WAF challenge.
Good general advice, until you have to try to explain to your SO that the VPN is required on their smart TV to access Jellyfin.
Then you expose your service on your local network as well. You can even do fancy stuff to get DNS and certs working if you want to bother. If the SO lives elsewhere, you get to deploy a Raspberry Pi to project services into their local network.
deploy a raspberry to project services into their local network
This piqued my interest!
What's a good way of doing it? What services, besides the VPN, would run on that RPi (or some other SBC or other tiny device...) to make Jellyfin accessible on the local network?
Well, I'd just go for a reverse proxy, I guess. If you are lazy, just expose it as an IP without any DNS. For working DNS, you can just add a public A record for the local IP of the Pi. For certs, you can't rely on the default HTTP method that Let's Encrypt uses; you'll need to do it via the DNS challenge or wildcards or something (sketch below).
But the thing is, as your traffic is on a VPN, you can fuck up DNS and TLS and Auth all you want without getting pwnd.
Feel free to ask, even in PM, if I can help. Not a guru myself, but getting a bit more experience over time.
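For the cert part, a hedged sketch of the DNS-01 route with certbot, using the Cloudflare plugin as an example; substitute your DNS provider's plugin, and the domain and token are placeholders:

```sh
sudo apt install certbot python3-certbot-dns-cloudflare

# API token with DNS-edit rights for your zone, kept out of other users' reach.
cat > ~/.cloudflare.ini <<'EOF'
dns_cloudflare_api_token = YOUR_API_TOKEN
EOF
chmod 600 ~/.cloudflare.ini

# DNS-01 proves ownership via a TXT record, so Let's Encrypt never has to
# reach the Pi over HTTP. That's what makes it work for LAN-only names.
sudo certbot certonly \
    --dns-cloudflare \
    --dns-cloudflare-credentials ~/.cloudflare.ini \
    -d jellyfin.example.com
```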
How did you do external backups?
I used to use a Docker container that makes dumps of the database and drops them into the same persistent storage folder the main application uses. I used this for everything in Docker that had a DB (a sketch of the pattern is below).
Immich has recently integrated this into the app itself, so it's no longer needed.
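For other stacks that still need it, a hedged sketch of that sidecar pattern using the prodrigestivill/postgres-backup-local image; service names, credentials, and paths are placeholders:

```yaml
services:
  db-backup:
    image: prodrigestivill/postgres-backup-local
    environment:
      POSTGRES_HOST: database        # the app's DB service name
      POSTGRES_DB: immich
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres    # placeholder
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: "7"
    volumes:
      # Dumps land in the same persistent-storage tree the app uses,
      # so one file-level backup job covers everything.
      - /dockerdata/immich/db-dumps:/backups
```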
All my Docker persistent data is in a top-level folder called dockerdata.
In that I have subfolders like immich, which get mounted as volumes in the Docker apps.
So now I have only one folder to back up for everything. I use ZFS snapshots to back up locally (zfs-auto-snapshot) and borgmatic for remote backups (BorgBase).
All my Docker stacks are compose files that are in git.
I can restore the entire server by restoring one data folder and one compose file per stack.
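To make that concrete, a hedged borgmatic config sketch for this layout (recent borgmatic versions; the BorgBase repo URL and retention numbers are placeholders):

```yaml
# /etc/borgmatic/config.yaml
source_directories:
    - /dockerdata

repositories:
    - path: ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo   # placeholder repo URL
      label: borgbase

# Keep a rolling window of archives in the remote repo.
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

Run `sudo borgmatic --verbosity 1` on a schedule (cron or a systemd timer) and the whole dockerdata tree, DB dumps included, lands in the remote repo.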