AI proxy in RS servers

Hey, I’ve worked a fair bit with AI in my remoteStorage apps, but from the very beginning I’ve had to store my API key in plaintext in my remoteStorage data. This means any app with read access could steal it.

The ideal solution would let apps request AI usage at the moment of authentication with remoteStorage, with the user answering the request by granting a number of tokens the app may spend.

Technically, this would work as an AI proxy on the remoteStorage server. During the OAuth flow, the app requests an AI scope, and the user determines how many tokens to grant. The user sees this alongside the usual permissions and approves a token budget. The app then sends AI requests to a new endpoint on the remoteStorage server using its existing bearer token. The server holds the real API key, checks the remaining budget, proxies the request to the AI provider, and returns the response. The app never touches the key.
This would give people one unified, easy place to manage their AI usage. Until now, I have often found myself forced to build my own backend, or to consider subscription models, just to offer AI features to users of my apps.
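
The flow above could be sketched roughly as follows. Note that everything here is an assumption for illustration: the endpoint path (`/ai/proxy`), the grant shape, and the header names are not part of any spec or existing server.

```typescript
// Hypothetical sketch of the proxy flow described above. The endpoint
// path, scope handling, and bookkeeping are assumptions, not a spec.

// What the server could track per user-approved AI grant.
interface AiGrant {
  budgetTokens: number; // tokens granted by the user during OAuth
  usedTokens: number;   // tokens spent so far
}

// Server-side check: does this request fit in the remaining budget?
function checkBudget(grant: AiGrant, requestedTokens: number): boolean {
  return grant.usedTokens + requestedTokens <= grant.budgetTokens;
}

// Server-side accounting after the provider reports actual usage.
function recordUsage(grant: AiGrant, actualTokens: number): AiGrant {
  return { ...grant, usedTokens: grant.usedTokens + actualTokens };
}

// Client-side: the app builds a request against the assumed proxy
// endpoint, authenticated with its existing RS bearer token. The real
// provider API key never leaves the server.
function buildProxyRequest(storageBase: string, bearerToken: string, body: object) {
  return {
    url: `${storageBase}/ai/proxy`, // assumed endpoint path
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${bearerToken}`, // same token as storage access
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}
```

The point of the sketch is the separation of concerns: the app only ever holds its bearer token, while budget enforcement and the provider key live entirely on the server.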

I’d love to hear thoughts. It definitely still needs some thought, e.g. what if a user wants to use a locally hosted provider like Ollama, but their storage is hosted on 5apps? Or what if one model is more expensive than another: should the budget be given in monetary value instead?

The question is about you as an app developer offering AI features in your RS apps to users, without hosting your own back-end, correct?

I think this is more of a general question about unhosted apps than it is about mixing non-storage functionality into RS server software.

The storage should probably only save bookmarks for configured services and maybe AI-related preferences, perhaps in a standardized data module that other apps can re-use.
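
As a rough illustration of what such a shared data module could look like, here is a sketch in the remotestorage.js module format. The module name, type name, and schema fields are all hypothetical; no such module exists yet.

```typescript
// Hypothetical data module for AI service bookmarks and preferences.
// Follows the remotestorage.js module shape ({ name, builder }), but
// the module name, type, and fields are invented for this sketch.
const aiPreferences = {
  name: "ai-preferences",
  builder: (privateClient: any) => {
    // A bookmark for a user-configured AI service.
    privateClient.declareType("service", {
      type: "object",
      properties: {
        label:   { type: "string" }, // e.g. "Local Ollama"
        baseUrl: { type: "string" }, // e.g. "http://localhost:11434"
        model:   { type: "string" }, // preferred default model
      },
      required: ["label", "baseUrl"],
    });

    return {
      exports: {
        // Save a service bookmark under services/<id>
        addService(id: string, service: object) {
          return privateClient.storeObject("service", `services/${id}`, service);
        },
        // List all configured services
        listServices() {
          return privateClient.getAll("services/");
        },
      },
    };
  },
};
```

Crucially, only bookmarks and preferences would be stored this way; the API key itself would stay out of the synced tree.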

Since none of the big providers allow per-user OAuth for their LLM APIs, I think the only viable solutions today are:

  1. Let users configure their own AI provider API keys (stored in localStorage/IndexedDB, not synced to RS)
  2. Let users configure a local API; you can likely add some presets for common ones
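
Option 1 is simple to implement. A minimal sketch, assuming an arbitrary storage key name and a `Storage`-like interface (in a browser you would pass `window.localStorage` as the store):

```typescript
// Keep the user's provider key on the device only, never writing it
// into the remoteStorage tree. The interface mirrors the subset of the
// Web Storage API we need; the storage key name is arbitrary.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const API_KEY_SLOT = "myapp:ai-api-key"; // hypothetical storage key

function saveApiKey(store: KeyValueStore, apiKey: string): void {
  store.setItem(API_KEY_SLOT, apiKey); // stays on this device only
}

function loadApiKey(store: KeyValueStore): string | null {
  return store.getItem(API_KEY_SLOT);
}

function clearApiKey(store: KeyValueStore): void {
  store.removeItem(API_KEY_SLOT);
}
```

The trade-off is that the key has to be re-entered on every device, which is exactly the inconvenience the proxy idea above tries to avoid.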