I am building a RESTful API that uses Auth0 for authentication. However, I am running into rate limiting, even after upgrading to a paid plan and setting our tenant environment to “production”.
Currently, I expect anyone calling my API to pass the (non-JWT) token obtained from a successful Auth0 login. I then pass that token to https://OURCOMPANY.auth0.com/userinfo to verify that it belongs to an existing user, and use the subject from the userinfo response to match the Auth0 user to a user in our database.
The problem is that I am being rate-limited after around 10 rapid requests. This routinely happens when I run a suite of regression tests that hits my API several times within a second or two, which in turn calls /userinfo repeatedly to make sure I have an authenticated user each time.
I understand that developers are generally expected to decode the JWT and validate its claims themselves. However, what if the user has been deleted or blocked since the token was issued? Is there any way to query Auth0 on a regular basis to make sure the user remains valid?
The recommended approach for your scenario is to use API Authorization: configure your REST API as an independent entity in the APIs section, so that client applications can request access tokens issued specifically for your own API. At this time that does imply the issuance of JWT access tokens that the API validates by itself; in the future, other access token formats could be supported (including opaque access tokens, which would have to be validated by calling the authorization server). The important thing to keep in mind is that the access token is issued to your API, and the API validates that association (in the JWT format, this is accomplished by validating the audience claim).
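As a rough illustration of that audience check, here is a minimal sketch. It is not production code: Auth0-issued access tokens are typically RS256-signed, so in a real API you would use a maintained JWT library against your tenant's signing keys rather than hand-rolling validation. The HS256 secret, audience URL, and claims below are made up for demonstration.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url_decode(s: str) -> bytes:
    # JWTs strip base64url padding; re-add it before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def b64url_encode(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()


def make_jwt(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT (demonstration helper only)."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    signed = f"{header}.{payload}".encode()
    sig = b64url_encode(hmac.new(secret, signed, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def validate_jwt(token: str, secret: bytes, expected_audience: str) -> dict:
    """Check signature, audience, and expiry; return claims or raise ValueError."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signed = f"{header_b64}.{payload_b64}".encode()
    expected_sig = hmac.new(secret, signed, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    aud = claims.get("aud")
    auds = aud if isinstance(aud, list) else [aud]
    if expected_audience not in auds:
        raise ValueError("token was not issued for this API")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Because the validation is purely local (signature plus claims), it involves no call back to Auth0 on the request path, which is what removes the rate-limiting problem.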
Bear in mind that an OpenID Connect access token plus a call to /userinfo is not the same thing as an OAuth 2.0 opaque access token plus a call to a token introspection endpoint (the common name for the endpoint that allows validation of an opaque access token). With this in mind, the rate limits you experienced are expected: the user information endpoint is simply not meant for the per-request validation you're trying to use it for.
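For contrast, a standard OAuth 2.0 token introspection call (RFC 7662) looks like the sketch below. This is purely illustrative: the endpoint URL and credentials are assumptions, and it is not something Auth0 necessarily exposes for your tenant today.

```python
import base64
import urllib.parse
import urllib.request


def build_introspection_request(introspection_url: str, token: str,
                                client_id: str, client_secret: str):
    """Build an RFC 7662 token-introspection POST request.

    The authorization server would answer with a JSON document containing
    at least an "active" boolean, so a revoked or expired opaque token
    can be detected server-side.
    """
    body = urllib.parse.urlencode({
        "token": token,
        "token_type_hint": "access_token",
    }).encode()
    # Introspection endpoints require the caller (the resource server)
    # to authenticate itself, commonly via HTTP Basic auth.
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        introspection_url,
        data=body,
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Authorization": f"Basic {basic}",
        },
        method="POST",
    )
```

Sending such a request with `urllib.request.urlopen(req)` would yield the JSON verdict on the token, which is the "query back to the authorization server" capability that /userinfo was never designed to provide.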
Finally, if the information made available by the resource server is sensitive enough to warrant tighter control over end-user access, you can consider reducing the lifetime of the issued access tokens. That way, if an end-user is removed from the system, the short lifespan of the access token means access will be blocked the next time the client application tries to obtain a fresh access token.
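The trade-off can be stated precisely: once a user is deleted, their last-issued token keeps working for at most its remaining lifetime, because the client cannot obtain a fresh token for a removed user. A trivial sketch of that window (names are illustrative):

```python
import time


def token_still_valid(issued_at: float, lifetime_seconds: int, now=None) -> bool:
    """True while the token is inside its lifetime window.

    Worst case: a user deleted immediately after issuance retains
    access for lifetime_seconds, so shorter lifetimes mean a smaller
    exposure window at the cost of more frequent token requests.
    """
    if now is None:
        now = time.time()
    return now < issued_at + lifetime_seconds
```

With a 10-minute lifetime, for example, a token issued at t = 1000 s is still honored at t = 1300 s but rejected at t = 1700 s.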