Status 412 when storeObject() quickly repeated

My app saves on every keystroke, so when the user is typing quickly, privateClient.storeObject() is called many times a second with a new version of the same object. Usually, this works fine, but sometimes the server will return status 412 “Precondition Failed”.

Should I ensure I don’t call storeObject() again until the previous call has returned, or otherwise throttle updates?

Yes, that would be advisable. Usually, it’s enough to debounce such keyboard event handlers so that not every keystroke immediately triggers a storeObject() call, but only when someone stops typing for a second, for example. This is not just better for libraries that sync data, but also for your JavaScript main thread, which would otherwise get blocked on every keystroke.
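Something like this minimal debounce sketch, for example (assuming a plain textarea and a privateClient from your module; the saveNote helper, the 'note' type, the path, and the 1000 ms delay are just placeholders):

```javascript
// Hypothetical debounce: only call storeObject() once typing has paused for a second.
let debounceTimer = null;

function onKeyUp (event) {
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(() => saveNote(event.target.value), 1000);
}

function saveNote (text) {
  // 'note' and the path are placeholders for whatever your module declares.
  privateClient.storeObject('note', 'notes/current', { content: text })
    .catch(err => console.error('storeObject failed:', err));
}

document.querySelector('textarea').addEventListener('keyup', onKeyUp);
```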


I use throttles that wait for 300ms after the last keystroke.

In my data module, I’ve implemented a promise queue, so a call to privateClient.storeObject() isn’t made until the promise of the previous call has been resolved.
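Roughly like this sketch of a per-path queue (privateClient and the 'note' type are assumed to exist; error handling is simplified):

```javascript
// Hypothetical per-path write queue: each storeObject() waits for the previous
// write to the same path before it starts.
const writeQueues = {};  // path -> promise of the most recently queued write

function queuedStore (type, path, object) {
  const previous = writeQueues[path] || Promise.resolve();
  const next = previous
    .catch(() => {})  // a failed write shouldn't block later writes
    .then(() => privateClient.storeObject(type, path, object));
  writeQueues[path] = next;
  return next;
}

// usage: queuedStore('note', 'notes/current', { content: text });
```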

If I submitted a PR that implemented this in remotestorage.js, would that be an acceptable change?

If the block/queue is scoped to the specific object’s path, then I don’t see how it could be detrimental.

I’ve taken a look at the code that generates the promise returned from storeObject(), but it was rather convoluted. There is already some kind of queue - I don’t think I’ll be able to work on this, at least until it’s better documented.

Even with the queue, I’ve had to impose a delay between calls to storeObject(). 700 ms did not prove enough in my tests; I had to go to 1000 ms to prevent conflicts from occurring.

If I haven’t miscoded something, this is likely to be a significant bar to adoption. Any library & protocol will have an upper limit on how fast records can be updated, but 1 update per second will be tough to sell to app developers.

Could you share your code perhaps? Both Litewrite and Hyperdraft update documents while typing, and I never saw conflicts with either myself.

I perceived a limit of 1 update per second when I was calling storeObject directly. Maybe double-check that the throttle is working as expected, or, yeah, post some code so we can have a look.

Remoteprobe (GitHub - DougReeder/remoteprobe: Test how RemoteStorage responds to rapid updates) is a test app that just repeatedly calls client.storeObject(). I usually see at least one 412 status, even when the updates are set to every 300 ms.
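For reference, the core of such a test boils down to something like this sketch (not Remoteprobe’s actual code; the 'probe' type, the path, and the 300 ms interval are just placeholders):

```javascript
// Hypothetical rapid-update loop: store a new version of the same object every
// 300 ms and log any rejection (e.g. a 412 surfacing as an error).
let counter = 0;

const intervalId = setInterval(() => {
  const version = ++counter;
  client.storeObject('probe', 'probe/test', { counter: version, updated: Date.now() })
    .then(() => console.log('stored version', version))
    .catch(err => console.error('store failed:', err));
}, 300);

setTimeout(() => clearInterval(intervalId), 300 * 20);  // stop after ~20 updates
```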

(FWIW, all my tests have been with 5apps as the backend).


This is something I’ve noticed as well (@rosano, it was in one of my many convoluted emails), and it was happening on my own hosted storage instance, not 5apps. I’ve been bashing my head against it for a bit, but I haven’t really dug into the client-side code around this just yet. If I’m understanding the problem… I think it might be a server-side issue.

It appears that occasionally the ETags get out of sync. The ETag is what the server uses to check the document’s version when you attempt an update; when it doesn’t match, the server returns a 412.

EDIT: THIS IS ALL WRONG AND WAS JUST SPECULATION ON MY PART. PLEASE SEE Status 412 when storeObject() quickly repeated - #12 by raucao BELOW. JUST LEAVING THIS FOR POSTERITY.

5apps, I believe, uses an S3 backend for data storage. So it’s entirely possible that we’re hitting a stale metadata read. Essentially, we write the data to S3, but when we attempt to read it immediately afterwards, we get old information. This is just because it takes a while to replicate the data across S3, so you may just hit a stale object.

Increasing the timeout would potentially help, since you’re giving the server more time between read and write requests.

We (at 5apps) store metadata in Redis, and only return 200s for PUTs when the ETAGs have been written there. There is a remote chance that you could receive an outdated ETAG, but I don’t think it’s very likely.

I would rather suspect that the client queues the additional update with an old/wrong ETAG in the If-Match request header, which is sent to ensure that you don’t overwrite something that you haven’t yet seen locally.

Example:

  • You are in a synced state for document foo.json, with ETAG 123 known both to the server and you
  • You modify the document locally
  • A PUT is queued with ETAG 123 in If-Match, to only overwrite what you have synced
  • Another PUT is queued with the ETAG 123 before the previous PUT has returned a new ETAG
  • The previous PUT has now updated the document on the server, and given it ETAG 456
  • The second PUT is trying to update foo.json with ETAG 123
  • The server refuses to overwrite the document, returning a 412
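For illustration, the same race can be sketched outside the library with two raw conditional PUTs against a remoteStorage server (this is not remotestorage.js code; the URL, token, and ETAG values here are made up):

```javascript
// Hypothetical illustration of the race with raw conditional PUTs.
// Both requests carry If-Match: "123"; whichever lands second gets a 412,
// because the other PUT has already changed the document's ETAG on the server.
const url = 'https://storage.example.com/myapp/foo.json';  // placeholder
const headers = {
  'Authorization': 'Bearer TOKEN',   // placeholder
  'Content-Type': 'application/json',
  'If-Match': '"123"'                // the last ETAG this client has seen
};

Promise.all([
  fetch(url, { method: 'PUT', headers, body: JSON.stringify({ rev: 1 }) }),
  fetch(url, { method: 'PUT', headers, body: JSON.stringify({ rev: 2 }) })
]).then(([first, second]) => {
  console.log(first.status, second.status);  // e.g. 200 and 412
});
```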

Warning: this is just a guess for now, as I haven’t looked into when the headers for the PUT are constructed. The Sync class is one of the most complex in the codebase, and none of the current maintainers originally wrote it. The TypeScript port was actually motivated by trying to refactor that class and not being able to keep track of the types of data being passed around the many functions involved, both inside and outside of that class. But perhaps the example is at least helpful for finding the actual bug in the end.