I am using the api.cache functionality in an M2M API to set key/value pairs. As a test, I am setting the key based on the application name. I have two applications. On each call to request a bearer token for the API, I simply write the key, increment the number of times called, and then issue a deny. I am not setting an explicit expiry time; per the documentation, the value should be cached for 15 minutes.
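For reference, my test Action looks roughly like this (the key prefix and deny code are placeholders of mine):

```js
exports.onExecuteCredentialsExchange = async (event, api) => {
  // Key the counter on the calling application's name.
  const key = `count:${event.client.name}`;

  // Read the previous count, if this node's cache has one, and increment it.
  const record = api.cache.get(key);
  const count = record ? parseInt(record.value, 10) + 1 : 1;

  // Write the value back with no explicit expiry (per the docs, ~15 minutes).
  api.cache.set(key, String(count));

  // Always deny so no token is actually issued during the test.
  api.access.deny('cache_test', `Call count for ${event.client.name}: ${count}`);
};
```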
…
What I have found is that values ARE BEING CACHED; however, there seem to be multiple copies of the cache, because I get seemingly random values back as I count up per key. It is as if the endpoint is load balanced and each node is unaware of the other nodes.
…
Is this expected behavior?
Our engineers responded: this behavior is expected.
The main use case solved by the cache is for an API access token, where the tokens are relatively uniform but requesting additional tokens is costly. It is not meant to replace a database.
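In rough terms, that pattern looks something like this inside an Action (the upstream endpoint, key name, and TTL here are illustrative, and it assumes a runtime with a global `fetch`):

```js
exports.onExecuteCredentialsExchange = async (event, api) => {
  // Reuse an upstream token if this node's cache still has a live copy.
  let token = api.cache.get('partner-api-token')?.value;

  if (!token) {
    // Cache miss: pay the expensive token request once...
    const res = await fetch('https://partner.example.com/oauth/token', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ grant_type: 'client_credentials' /* credentials elided */ }),
    });
    const body = await res.json();
    token = body.access_token;

    // ...and cache it so later executions on this node skip the round trip.
    api.cache.set('partner-api-token', token, { ttl: 10 * 60 * 1000 }); // 10 minutes
  }

  // Use `token` to call the upstream API as needed.
};
```

The win is that most executions skip the upstream token request entirely; a miss on a given node just means that node pays the cost once per window.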
Is there an article outlining a valid use case for the API cache? Or can you explain an example of how the main use case you are describing might work?
Without sticky sessions (you can’t actually depend on hitting the same cache each time you call a service), I am struggling to understand a valid use case.
Thanks for the link. This was helpful. Even with the limitations of not necessarily returning to the same node (and hitting the same cache) and the 15-minute cap on cache lifetimes, the feature is good enough to prevent overt authentication spamming. I define authentication spamming as follows:
1. The client requests a bearer token using its client_id and client_secret.
2. The client runs a request with the bearer token.
3. The client makes no effort to cache its bearer token and, for subsequent requests, just repeats steps 1 and 2 (see the sketch below).
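In code, that naive pattern looks something like this (the tenant domain, audience, and API URL are placeholders):

```js
// Naive client: fetches a brand-new token for EVERY call instead of caching one.
async function callApi(path) {
  const tokenRes = await fetch('https://YOUR_TENANT.auth0.com/oauth/token', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'client_credentials',
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET,
      audience: 'https://api.example.com',
    }),
  });
  const { access_token } = await tokenRes.json();

  return fetch(`https://api.example.com${path}`, {
    headers: { authorization: `Bearer ${access_token}` },
  });
}
```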
This is easier to program (albeit slower) but would quickly burn through authentication requests. Rather than use an external caching mechanism to front these requests, which would have its own issues with uptime, reliability, failure, and so on, I was hoping to use something in the platform to accomplish the same thing.
I think the api.cache feature, as outlined in the link, handles this. Even if it is not optimal in terms of the actual client calls, it is good enough to get the job done.
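Concretely, the sort of guard I have in mind in the credentials-exchange Action is something like this (the threshold and messages are mine, and because each node keeps its own count the limit is only approximate):

```js
const MAX_TOKEN_REQUESTS = 10; // illustrative threshold per cache window, per node

exports.onExecuteCredentialsExchange = async (event, api) => {
  const key = `token-requests:${event.client.client_id}`;

  // Count this node's recent token requests from the calling client.
  const record = api.cache.get(key);
  const count = record ? parseInt(record.value, 10) + 1 : 1;
  api.cache.set(key, String(count)); // default expiry (~15 minutes per the docs)

  // Soft limit: a client that caches its token never gets near this;
  // a client that repeats steps 1 and 2 for every call gets denied quickly.
  if (count > MAX_TOKEN_REQUESTS) {
    api.access.deny(
      'too_many_token_requests',
      'Cache your bearer token and reuse it until it expires.'
    );
  }
};
```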