Let’s continue discussing this in here…
I think I misunderstood the original post by @michielbdejong (Features app developers really really need). I think all the features are things that are somewhere on the horizon, but I didn’t realize this was for this draft of the spec.
The reason I misunderstood is that I don’t think anyone at Unhost this past September said we need ACLs in the next draft of the spec; at least I don’t remember that being the discussion. A lot of thought would need to go into ACLs before we considered drafting a spec for them or adding them to the existing one.
So when you brought it up, I assumed you were asking which features, in general, we should consider down the road. To which I said all of them. As for what should go in the upcoming draft of the spec, I think the minor additions have already been discussed at length, both during the conference and afterward: draft-dejong-remotestorage-02.txt
Anywho, I’m still up for discussing & exploring the idea of ACLs in remoteStorage here, if anyone has anything to add. @michielbdejong Do you actually want to discuss ACLs, or did you only bring it up to add things to the list of requests made to change the spec (to make it sound like there were tons of requests made for immediate additions to the spec)? Or do you have some other thoughts on the topic?
yeah i agree there’s not enough time to do ACLs in this spec, it would have to be the next one then, if anybody is interested in picking it up.
for this spec, i created two versions, one official ‘base’ functionality, and an alternative version which people can implement if they’re interested. i only added the filesizes feature, though. not the other three.
But, aren’t the HEAD requests an expected behavior of HTTP?
Yes, but please don’t discuss that in the ACL topic.
Very very old topic, but have been looking for a way to do this recently. Was wondering if it was still on the cards…
The reason for it was to be able to create an app like a todo list using RS, with the ability to share a todo list. It could also be useful for note-taking or drawing-sharing apps. For this, you would need to be able to grant access to “someone” for a specific file/folder (finer-grained than the current spec allows).
App flow could be:
- (user1) Create a new todo list
- (user1) Click a share button
- (app/server) Creates a token for that specific todo list (for a specific user)
- (user1) Gives that URL (e.g. https://firstname.lastname@example.org/#token=500P3R53CR3T) to the other person
- (user2) Accesses url
Apps could potentially store the tokens they receive for other people’s RS files/folders in the user’s own RS storage, so that user2 could access user1’s files/folders from another instance.
This mechanism could also be used to give certain people read-only access to files/folders (different from the public folder in that only certain people would have access).
Saw there was some other interesting work going on that could be useful for this:
- Showing authorised apps (@xangelo) https://github.com/remotestorage/armadietto/pull/47
- Generating tokens for server apps (@DougReeder) https://github.com/remotestorage/armadietto/issues/75
Also related to another ancient topic:
You seem to be aware of the past conversations around this topic. So as you may know, the philosophy of RS is to stay as simple as possible for a personal data store, which only provides public access to others via URLs in the
public folder. And as other people’s identity isn’t known to the RS server, there cannot be any complicated permissions/ACLs for such use cases.
So, considering this background, your approach is much simpler, and I think it could work. However, I’m not sure (authorized) foreign access should ever make it into the protocol, as a matter of principled design. The way RS would currently allow for this is by exchanging data via a p2p (or other) transport and simply storing a version in every collaborator’s own storage. But this is not ideal for scenarios where you want to remove someone’s access to data entirely, of course, since they will always have a backup of the last version they were able to access.
For the sake of argument, let’s say the outlined approach is something we’d want to add to the protocol in some way. One problem I see with it (as is) would be that you have to keep track of which token belongs to whom, which requires a bit more detail in the spec and implementation than only adding tokens for arbitrary subdirectories. I guess there would have to be the possibility of adding a string of information to the token, but it could become rather messy if different collaborative apps use different identifiers for other people.
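To illustrate the bookkeeping problem: per token, the server (or app) would have to record who it was issued to, what path it covers, and what it permits. A minimal sketch, where all the field names (`grantee`, `path`, `mode`) are assumptions rather than anything from the spec:

```javascript
// token -> grant metadata; in practice this would be persisted server-side
const grants = new Map();

function issueGrant(token, { grantee, path, mode }) {
  grants.set(token, { grantee, path, mode, issuedAt: Date.now() });
}

function checkGrant(token, path, mode) {
  const g = grants.get(token);
  if (!g) return false;
  // read-only tokens must not authorize writes
  if (mode === 'write' && g.mode !== 'write') return false;
  // a token only covers its own subtree
  return path === g.path || path.startsWith(g.path + '/');
}

function revokeGrant(token) {
  grants.delete(token);
}
```

The `grantee` string is exactly where the messiness I mentioned comes in: different collaborative apps could put different kinds of identifiers in there unless the spec pins down a format.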
What do others think? Maybe @michielbdejong, you had some (new) thoughts about it in the last couple of years?
Yes, if you compare remoteStorage with Solid, the two big differences are really:
- Solid has ACLs
- Solid has a deeper connection with RDF
The way Web Access Control works is that it mentions the WebID of the person who gets access to a certain file, folder, or folder tree. It then uses OpenID Connect and proof-of-possession tokens to allow an app to prove that it acts on behalf of the WebID that is listed in the ACL. We could do the same with webfinger addresses.
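Roughly, a WAC-style ACL maps a resource (or folder) to a list of agents and the access modes they get, and a request is checked against the nearest governing ACL. A sketch of that lookup with webfinger-style `acct:` addresses standing in for WebIDs; the record shape is an assumption (real WAC is expressed in RDF/Turtle, not JSON):

```javascript
// folder path -> list of { agent, modes } rules
const acl = {
  '/todos/list-1/': [
    { agent: 'acct:user1@example.org', modes: ['read', 'write'] },
    { agent: 'acct:user2@example.org', modes: ['read'] },
  ],
};

// walk up from the resource to the nearest folder that has ACL rules,
// and let that folder's rules decide (as in WAC, the nearest ACL governs)
function isAllowed(aclMap, agent, path, mode) {
  let prefix = path;
  while (prefix.length > 0) {
    prefix = prefix.slice(0, prefix.lastIndexOf('/') + 1); // containing folder
    const rules = aclMap[prefix];
    if (rules) {
      return rules.some(r => r.agent === agent && r.modes.includes(mode));
    }
    prefix = prefix.slice(0, -1); // step up past the trailing slash
  }
  return false;
}
```

The hard part is not this lookup but the authentication behind it: proving that the app making the request really acts on behalf of that webfinger address, which is what Solid uses OIDC and proof-of-possession tokens for.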
However, even in the Solid protocol, we see the downsides of having such a complex part in the spec: the latest version of Solid says that servers should implement either WAC or ACP (Access Control Policies), which is a competing ACL system. The main reason some people prefer ACP over WAC is that it allows them to define not only ALLOW rules but also DENY rules, and that these DENY rules cannot be overruled in subfolders.
The other issue with both WAC and ACP is that it quickly becomes infeasible to express access grants in terms of access to specific files and folders. So if you look at Solid Application Interoperability (SAI), which is basically their equivalent of our remoteStorage modules, it bypasses both WAC and ACP altogether, and grants access based on shape trees that define what type of data is inside a given file, rather than on a per-resource basis.
So ACLs as they have been explored in Solid with WAC and ACP give resource-location-based access control, but what you probably want is some sort of semantic access control. Even so, you could of course have server-wide ACLs that refer to shape trees instead of referring to specific folders. And maybe we can do something like that, in parallel with how SAI evolves in the Solid project.
Another side note: if you think about data that can move around freely, as is often a requirement in data portability projects, linking access control to specific servers is probably not the right paradigm. We ran into that with the Federated Timesheets project (Federated Timesheets Community Group). So maybe the solution is to package the data and its ACLs into one transportable container. But then we have to trust all servers where this data ends up to actually apply the ACLs correctly, and often the devices to which the data is allowed to move should also be described there in a machine-readable way, alongside the information about which users are actually allowed to see the data.
In local-first paradigms like “Live Data” (see m-ld.org) and “Liquid Data” (see blogpost that I haven’t written yet, Liquid Data · Issue #21 · federatedbookkeeping/task-tracking · GitHub) it doesn’t even mean anything anymore to have “write access”, since you’re always allowed to write changes to your own local copy, and whether or not that change is then allowed to sync to a remote node is sometimes not even a synchronous or deterministic decision.