this post was submitted on 06 Mar 2026
18 points (80.0% liked)

Sysadmin


I work on an HPC and often have to share files with other users. The most approachable solution is an external cloud storage, shuttling files back and forth. However, some projects are quite heavy (several TB), so that is unfeasible. We do not have a shared group. The following is the only solution I found short of just setting all permissions to 777, and I still don't like it.

Create a directory and set an ACL giving access to the selected users. This works fine when users create new files in there, but not when they copy files from somewhere else, since with the default umask of 022 the copied files arrive without group write. The only thorough fix seems to be changing the default umask to 002, which however affects file creation system-wide. The alternative is to fix permissions every time you copy something, but we all know very well that is not going to happen.
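To make the umask effect concrete, here is a quick sketch (the /tmp paths are just placeholders for illustration):

```shell
# Under umask 002, a newly created file is group-writable...
umask 002
touch /tmp/created_in_place
stat -c '%a' /tmp/created_in_place    # 664

# ...while under the usual 022 it is not, and any ACL entries on it
# end up capped by the resulting mode.
umask 022
touch /tmp/created_under_022
stat -c '%a' /tmp/created_under_022   # 644
```

New files are created as 0666 masked by the umask, which is exactly why a 022 default strips the group write bit.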

Does it really have to be such a pain in the ass?

[–] warmaster@lemmy.world 13 points 3 days ago (1 children)

I'm no sysadmin, I just run my homelab. Let me get this straight... you want to bypass system-level access restrictions, with some form of control, without going through your company's standard method because of bureaucracy?

If that's the case: why not put something in front, like OpenCloud for example?

I mean, maybe OC is not what you need, but conceptually... would a middleman solution work for you? If so, you could go with a thousand different alternatives depending on your needs.

[–] ranzispa@mander.xyz 1 points 3 days ago* (last edited 3 days ago) (3 children)

A cloud solution is indeed an option, though not a very palatable one. The main problem would be pricing. From what I can see, you can get 1 TB for about 10€/month, and we'd need substantially more than that. The cost is feasible and not excessive, but frankly it's a bit of a joke to have to use someone else's server when we have our own.

You want to bypass system level access level restrictions with some form of control but not go through your company's standard method of doing so because of bureaucracy?

Yes. It's not a company but public research, which means asking for a group change may lead to several people in the capital debating whether that is appropriate. I'd like this to be a joke, but it is not. We'd surely get access eventually, but the unfortunate side effect is that every new person who needs access would have to wait through all that paperwork.

[–] possiblylinux127@lemmy.zip 6 points 3 days ago (1 children)

Don't bypass your organizational policies

[–] ranzispa@mander.xyz 2 points 2 days ago (1 children)

I am not bypassing any policy: the HPC is there to collaborate on, and data can be shared. Not having a shared group is not a policy; it's just that not all users are in the same group, and users are added to a single group by default. We are indeed allowed to share files; hell, most of the people I want to share stuff with are part of my own research group. ACLs are allowed on the HPC. I'm asking how to use them properly.

If you have anything actually useful, go ahead; otherwise don't worry, I know better than you what I should or should not do.

[–] possiblylinux127@lemmy.zip 2 points 2 days ago (1 children)

You are in way over your head

Stop now before you get yourself in hot water

[–] ranzispa@mander.xyz -1 points 1 day ago
[–] Luckyfriend222@lemmy.world 4 points 3 days ago (1 children)

I think he meant self-hosting Opencloud

[–] warmaster@lemmy.world 3 points 2 days ago (1 children)

Yes. That's what I recommended. Self-host whatever middleman software. Opencloud, WebDAV, S3, FTP, anything he puts in the middle can accomplish what he wants.

[–] ranzispa@mander.xyz 1 points 2 days ago (1 children)

I see! Well, I currently do not have another server with enough storage that we could use for this purpose. Maybe in the future; that would solve a bunch of problems, this being only one of them.

We do have a storage server, but that is local-only and for backups only: not going to open it to the internet.

It is indeed a solution. What seems absurd to me is having to consider a solution that requires two servers.

[–] warmaster@lemmy.world 3 points 2 days ago (1 children)

You don't need additional storage. It's one program you need to set up.

[–] ranzispa@mander.xyz 1 points 2 days ago (1 children)

It is not something I can set up on that server; I would need a separate server to set up something of that kind.

[–] warmaster@lemmy.world 2 points 2 days ago (1 children)

If it's a compliance problem, I get it. From a practical standpoint, FTP or WebDAV don't require installing anything.

[–] ranzispa@mander.xyz 0 points 1 day ago (1 children)

It's not strictly about compliance; setting up FTP or WebDAV without root access is technically complex. You'd also have to account for the fact that sessions on an HPC are time-limited. You could probably come up with some workaround, but I'm not sure it would be any better than my current setup.

[–] warmaster@lemmy.world 1 points 1 day ago (1 children)
[–] ranzispa@mander.xyz 0 points 1 day ago (1 children)

I didn't fully understand what this software does, but it looks pretty neat. However, regardless of what it does, it's not something I could use in my case. It spawns a server, and I imagine it can do its thing only as long as that server is running, which in my case would be around 8 hours, the login session time limit on the HPC. Moreover, I'd be running a potentially resource-hungry process on a login node, which is a big problem. I could request a compute job and run it there, I guess, but that would still be limited by the queue's maximum time.

On top of that, while not impossible, login-node-to-login-node communication would be a pain in the ass. I'd have to either always connect to the same node to spawn it, or let everyone know the IP the server is currently running on, and I'd have to do this manually every 8 hours. It is feasible, but this is probably better software for other kinds of problems.

[–] warmaster@lemmy.world 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

Previously you said you didn't want to duplicate the files because it was a ton of data. Now you're saying that accessing it on demand is impractical.

It's becoming difficult to help you. Not because of your technical context, but your attitude.

Copyparty introduction

[–] ranzispa@mander.xyz 0 points 22 hours ago

I'm grateful for all the help and advice in here. Duplicating the data is not a problem; we can have several copies of it on the server.

Having the data on an external server, on the other hand, may be a problem, because that would require quite a large amount of storage capacity.

I'm unsure what you mean by accessing on demand: the data is already on the server and people can access it. My main pain point is that when people copy files in there, rather than creating them in place, I don't get write access by default.
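One workaround worth noting for exactly this pain point, as a sketch (GNU coreutils only; the /tmp paths are placeholders): `cp --no-preserve=mode` creates the destination with default permissions instead of replicating the source's restrictive mode, so a directory's default ACL can actually take effect.

```shell
# Source file is private (600)...
touch /tmp/private_src
chmod 600 /tmp/private_src

# ...a plain cp replicates that restrictive mode,
cp /tmp/private_src /tmp/plain_copy

# ...while --no-preserve=mode lets the destination take default
# permissions (and, in a directory with a default ACL, the inherited entries).
cp --no-preserve=mode /tmp/private_src /tmp/open_copy

stat -c '%a' /tmp/plain_copy /tmp/open_copy   # 600 then 644 (with umask 022)
```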

The copyparty software looks interesting for other applications, and I may pick it up for something else, but it's not something that would work in this case. As I've explained extensively, while spawning a file server would not be impossible, it would be a huge hassle with no real advantage.

[–] warmaster@lemmy.world 1 points 2 days ago

I recommended Self-hosting whatever middleman software. Opencloud, WebDAV, S3, FTP, anything you put in the middle can accomplish what you want.