this post was submitted on 06 Mar 2026
16 points (83.3% liked)

Sysadmin


A community dedicated to the profession of IT Systems Administration

 

I work on an HPC system and often have to share files with other users. The most approachable solution is an external cloud storage that we shuttle files back and forth to. However, some projects are quite heavy (several TB), and for those that is unfeasible. We do not have a shared group. The following is the only solution I found short of just setting all permissions to 777, and I still don't like it.

Create a directory and set an ACL to give access to the selected users. This works fine when users create new files in there, but it does not work when they copy files in from somewhere else, because the default umask is 022. Thus the only workable fix is to change the default umask to 002, which however affects every file a user creates, not just those in the shared directory. The alternative is to fix the permissions every time you copy something, but you all know very well that is not going to happen.
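
For illustration, roughly what the setup described above looks like (user names and paths are made up):

```bash
# shared directory with ACLs for the selected users (names/paths illustrative)
mkdir /scratch/proj_share
setfacl -m u:alice:rwx,u:bob:rwx /scratch/proj_share      # access ACL on the directory
setfacl -d -m u:alice:rwx,u:bob:rwx /scratch/proj_share   # default ACL, inherited by new entries

# a file created in place picks up the default ACL with write access,
# but a copy keeps the source's group bits (r-- under umask 022),
# which become the ACL mask and cut the inherited entries down to read-only:
cp ~/results.dat /scratch/proj_share/
getfacl /scratch/proj_share/results.dat   # shows mask::r--, so collaborators cannot write
```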

Does it really have to be such a pain in the ass?

[–] bjoern_tantau@swg-empire.de 11 points 1 week ago (7 children)

Uh, why not create the shared group? That's more or less exactly the purpose of their existence.

load more comments (7 replies)
[–] warmaster@lemmy.world 8 points 1 week ago (1 children)

I'm no sysadmin, I just run my homelab. Let me get this straight... You want to work around system-level access restrictions, with some form of control, but not go through your company's standard method of doing so because of bureaucracy?

If that's the case: why not put something in front, like Opencloud for example?

I mean, maybe OC is not what you need, but conceptually... would a middleman solution work for you? If so, you could go with a thousand different alternatives depending on your needs.

[–] ranzispa@mander.xyz 1 points 1 week ago* (last edited 1 week ago) (3 children)

A cloud solution is indeed an option, however not a very palatable one. The main problem with a cloud solution would be pricing. From what I can see, you can get 1TB for about 10€/month. We'd need substantially more than that. The cost is feasible and not excessive, but frankly it's a bit of a joke to have to use someone else's server when we have our own.

You want to work around system-level access restrictions, with some form of control, but not go through your company's standard method of doing so because of bureaucracy?

Yes. It's not a company but public research, which means asking for a group change may lead to several people in the capital discussing whether that is appropriate or not. I'd like this to be a joke, but it is not. We would surely get the group eventually, but with the unfortunate side effect that every new person who needs access would have to wait for all that paperwork.

[–] possiblylinux127@lemmy.zip 6 points 1 week ago (1 children)

Don't bypass your organizational policies

[–] ranzispa@mander.xyz 2 points 1 week ago (1 children)

I am not bypassing any policy: the HPC is there to collaborate on, and data can be shared. Not having a shared group is not a policy; it's just that not all users are in the same group and users are added to only one group by default. We are indeed allowed to share files; hell, most of the people I want to share stuff with are part of my own research group. ACLs are allowed on the HPC. I'm asking how to use them properly.

If you have anything actually useful, go ahead; otherwise, don't worry, I know better than you do what I should or should not do.

[–] possiblylinux127@lemmy.zip 1 points 1 week ago (1 children)

You are in way over your head

Stop now before you get yourself in hot water

load more comments (1 replies)
[–] Luckyfriend222@lemmy.world 2 points 1 week ago (1 children)

I think he meant self-hosting Opencloud

[–] warmaster@lemmy.world 2 points 1 week ago (1 children)

Yes. That's what I recommended. Self-host whatever middleman software. Opencloud, WebDAV, S3, FTP, anything he puts in the middle can accomplish what he wants.

[–] ranzispa@mander.xyz 1 points 1 week ago (1 children)

I see! Well, I currently do not have another server with enough storage that we could use for this purpose. Maybe in the future, and that will solve a bunch of problems, this being only one of them.

We do have a storage server, but that is local only and backup only: not going to open it to the internet.

It is indeed a solution. What seems absurd to me is having to consider a solution that requires two servers.

[–] warmaster@lemmy.world 2 points 1 week ago (7 children)

You don't need additional storage. It's one program you need to set up.

load more comments (7 replies)
[–] warmaster@lemmy.world 1 points 1 week ago

I recommended self-hosting whatever middleman software: Opencloud, WebDAV, S3, FTP. Anything you put in the middle can accomplish what you want.

[–] blackbirdbiryani@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (4 children)

I'm in a similar position to you. Our lab has a partition on the HPC, but I need a way to quasi-administrate other lab members without actually having root access. What I found works is to have a shared bashrc script (which also contains useful common aliases and env variables) and get all your users to source it from their own bashrc files. Set the umask within the shared bashrc file. Set certain folders to read-only (for common references, e.g. genomes) if you don't want people messing with shared resources. However, I've found that it's only worth trying to administer shared resources and large datasets; otherwise, let everyone junk up their home folder with their own analyses. If the home folder is size-limited, create a per-user folder in the scratch partition and let people store their junk there however they want. Just routinely check that nobody is abusing your storage quota.
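
For illustration, a minimal sketch of such a shared bashrc (paths and names are made up):

```bash
# /project/lab_share/etc/lab.bashrc -- each member sources this from their own ~/.bashrc
umask 002                                # make new files group-writable by default
export LAB_SCRATCH=/scratch/lab_share    # common environment variables
alias ll='ls -lah'                       # example of a shared convenience alias
```

Each member then adds `source /project/lab_share/etc/lab.bashrc` to their own ~/.bashrc.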

EDIT: absolutely under no circumstances give people write access to raw shared data on the HPC. I guarantee some idiot will edit it and mess it up for everyone. If people need to rename files they can learn how to symlink them.

[–] biber@feddit.org 2 points 1 week ago (1 children)

This is a pretty good idea!

In addition, I recommend having all data e.g. as a (private) DataLad archive synchronized to Dataverse, OSF, figshare or wherever; that way edits are versioned.
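
For illustration, a rough sketch of that idea (dataset name and paths are made up; pushing to OSF, Dataverse, etc. needs the corresponding DataLad extension):

```bash
# version a shared data directory as a DataLad dataset (illustrative names)
datalad create shared_data
cp -r /path/to/raw shared_data/raw
datalad save -d shared_data -m "import raw data"
# a sibling on OSF, Dataverse, figshare, etc. can then be added and pushed to,
# so every later edit to the data stays versioned
```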

[–] ranzispa@mander.xyz 1 points 1 week ago (1 children)

I am generally using DVC to version data, are those better options?

[–] biber@feddit.org 1 points 1 week ago

I don't know, seems to be quite similar :)

load more comments (3 replies)
[–] twack@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Here's someone that solved this by monitoring the directory using inotifywait, but based on the restrictions you already mentioned I'm assuming you can't install packages or set up root daemons, correct?

https://bbs.archlinux.org/viewtopic.php?id=280937

Edit: CallMeAl beat me to this exact same answer by 15 minutes.
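
For illustration, a rough sketch of that approach (the path is a placeholder, and it assumes inotify-tools is available and you are allowed to keep a user-level process running):

```bash
#!/usr/bin/env bash
# watch the shared directory and relax group permissions on anything new
SHARE=/scratch/proj_share
inotifywait -m -r -e create -e moved_to --format '%w%f' "$SHARE" |
while read -r path; do
    chmod g+rw "$path" 2>/dev/null   # or re-apply the ACL entries with setfacl
done
```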

[–] chris@l.roofo.cc 3 points 1 week ago

You can set ACLs on directories that get applied recursively. This makes it possible to have all files carry the correct permissions. I am on the go right now, but you should look into setfacl. It's been a while, but I am pretty sure that worked. That way you should even be able to say which groups or users can do what, with granularity.
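
For illustration, roughly what that looks like with setfacl (group name and path are made up):

```bash
# grant the group rwX on everything already in the shared tree
setfacl -R -m g:labmates:rwX /scratch/proj_share
# and set a matching default ACL on the directories so new entries inherit it
find /scratch/proj_share -type d -exec setfacl -d -m g:labmates:rwX {} +
```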

[–] linuxguy@piefed.ca 2 points 1 week ago (1 children)
[–] ranzispa@mander.xyz 1 points 1 week ago (1 children)

I thought sticky bits were used to allow other users to edit files but not delete them. Do they also allow inheriting the parent directory permissions?

[–] linuxguy@piefed.ca 1 points 1 week ago (1 children)

I didn't intend it as, and don't think, the sticky bit stuff will or could be a complete solution for you. You've got some oddly specific and kind of cruddy restrictions that you've got to work around, and when they get that nonsensical, one ends up solidly in "cruddy hack" territory.

From the article:

group + s (pecial)

Commonly noted as SGID, this special permission has a couple of functions:

If set on a file, it allows the file to be executed as the group that owns the file (similar to SUID).
If set on a directory, any files created in the directory will have their group ownership set to that of the directory owner.

You could run something like https://pypi.org/project/uploadserver/ in screen, or run a cron job every minute that just recursively sets the correct permissions.
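
For illustration, rough sketches of those two pieces (paths are made up):

```bash
# setgid bit on the shared directory: new files and subdirectories created
# inside it inherit the directory's group
chmod g+s /scratch/proj_share

# illustrative crontab entry: once a minute, re-grant group write on any of
# your own files in the share (crude, but catches copied files that g+s misses)
* * * * * find /scratch/proj_share -user "$LOGNAME" -type f ! -perm -g=w -exec chmod g+rw {} +
```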

[–] ranzispa@mander.xyz 3 points 1 week ago (2 children)

Wow, that g+s looks like exactly the clean solution I was looking for! I'll test it out and report back. I'll have to wait until Monday for the colleagues to be back on the server, but it seems very promising.

Thank you very much!

[–] sem@piefed.blahaj.zone 2 points 1 week ago (1 children)

Can you check back in here and let us know if it worked?

[–] ranzispa@mander.xyz 1 points 5 days ago

Hello, I'm back after trying it. The g+s bit does not change the group permissions when a file is copied in. However, I have observed that I can delete copied files even though they are 630 and supposedly I should not be able to modify them (deleting only requires write permission on the directory, not on the file itself). That is good enough for me: as long as I can delete the stuff it's OK, and if I need to modify something I can always copy it somewhere else.

[–] linuxguy@piefed.ca 1 points 1 week ago

Wahoo! Best of luck!

[–] CallMeAl@piefed.zip 2 points 1 week ago (1 children)

I'm pretty sure you can do this by adding default user entries to the directory ACL, which will then be set on files added to that dir.

[–] ranzispa@mander.xyz 1 points 1 week ago* (last edited 1 week ago) (1 children)

Default user entries are in there and they do work; however, when copying existing files, those entries get masked by the copied file's group permissions. As such, the only solution I found is to have everyone set their umask to 002, as otherwise we do not get write access to files that are copied rather than created in place.

[–] CallMeAl@piefed.zip 2 points 1 week ago (3 children)

Ah, I see. Well, it's ugly, but you could use inotify to trigger a tiny script that updates the perms when files are added or copied to the share dir.

load more comments (3 replies)
[–] frongt@lemmy.zip 1 points 1 week ago (1 children)

A dedicated file sharing application.

[–] ranzispa@mander.xyz 1 points 1 week ago (13 children)

What do you mean? Is there an application that allows easily sharing files on one Linux system? That would be nice!

If you mean going through an external server or peer-to-peer transfer, that is not too feasible. I do not have other storage with tens of terabytes available, and transferring that much data through some P2P layer, while feasible, would probably be even less user friendly.

load more comments (13 replies)
[–] poinck@lemmy.world 1 points 1 week ago (1 children)

I have a similar need and I am curious whether my current solution is any good:

The data of interest is on a server that can only be accessed with SSH from inside the institution. I've set up a read-only NFS share to a server that runs a webserver (HTTPS enabled). There, I set up a temporary WebDAV share pointing at the read-only NFS mount point and protected it with htpasswd, since members of external institutions do not have accounts at our institution.

As soon as the transfer is complete I remove all the shares (nfs, webdav).
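
For illustration, a rough sketch of that flow (host names and paths are made up; the WebDAV part itself is left to whatever the webserver provides):

```bash
# on the data server: temporary read-only NFS export towards the web host
exportfs -o ro,root_squash webhost.example.org:/data/project

# on the web server: mount it read-only and create credentials for the external users
mount -t nfs -o ro dataserver.example.org:/data/project /mnt/project_ro
htpasswd -c /etc/httpd/share.htpasswd external_user
# ...point a WebDAV location at /mnt/project_ro, protected by that htpasswd file...

# once the transfer is complete, tear everything down again
umount /mnt/project_ro
exportfs -u webhost.example.org:/data/project
```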

[–] ranzispa@mander.xyz 1 points 1 week ago

This is a good idea and something I may set up once we have our own compute server. At that point, though, wouldn't a synced directory be a better fit for the purpose? For instance, you define a directory on the external server for shared data, and every user syncs it to their own share on the main server through rsync or unison to get all the shared data.
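
Something along the lines of (host name and paths are made up):

```bash
# each user pulls the shared directory from the external server into their own space
rsync -av shareserver.example.org:/srv/share/ "$HOME/share/"
```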

Just throwing it out there, I'm not sure if that fits your use case.
