Thank you for being candid; that admittedly sounds like a poor experience.
We’ve recently passed this thread directly to our product team. They are aware of this feature’s limitations and have plans to investigate possible solutions.
I don’t have a timeline yet, but I appreciate your patience while the team takes some time to evaluate.
Yeah, the caching available within Actions is a good start, but I think the client credentials flow needs some kind of mechanism for a) getting a reference to a token after it is generated, and b) telling the flow to exit successfully with a token retrieved from the cache before a new token is generated.
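To make it concrete, here is roughly what I mean. This is only a sketch of a credentials-exchange Action: `api.cache.get`/`api.cache.set` exist in Actions today, but the calls marked HYPOTHETICAL (and the cache-key fields) are assumptions about an API that does not exist yet, which is exactly the gap.

```typescript
// Illustrative only: a cache-aware credentials-exchange Action, if Auth0 exposed
// the two missing pieces. The HYPOTHETICAL calls below are kept in comments
// because they do not exist today.
exports.onExecuteCredentialsExchange = async (event: any, api: any) => {
  // Cache key fields are assumptions about the event shape.
  const cacheKey = `m2m:${event.client.client_id}:${event.resource_server.identifier}`;

  // (b) HYPOTHETICAL: exit the flow successfully with a previously issued token.
  const cached = api.cache.get(cacheKey);
  if (cached) {
    // api.accessToken.issueFromCache(cached.value); // hypothetical API
    return;
  }

  // (a) HYPOTHETICAL: get a reference to the freshly generated token so it can
  // be cached until it expires.
  // const issued = await api.accessToken.getIssuedToken(); // hypothetical API
  // api.cache.set(cacheKey, issued.raw, { ttl: issued.expires_in * 1000 });
};
```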
Actually, a follow-up on this for anyone who comes across it: we ran out of our M2M tokens for February, and I just implemented a workaround that works for us since we control all the clients. We only use M2M tokens for some service-to-service calls, plus bash scripts and a few lambdas.
I created a new Database connection named “system” with a single user in it, and authorized only our M2M application to use it. Then I changed our core auth class, which generated the M2M tokens, to use normal username/password auth with the “system” realm. I also had to slightly change our Login Action, which sets some custom claims and validates the user against our API, but that change was pretty minor; a rough sketch follows.
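Something along these lines (the validation endpoint and claim names are placeholders, and this assumes the Action runtime provides `fetch`):

```typescript
// Minimal sketch of the adjusted post-login Action for the "system" connection.
exports.onExecutePostLogin = async (event: any, api: any) => {
  // Only treat logins on the dedicated "system" connection as machine logins.
  if (event.connection.name !== 'system') return;

  // Validate the system user against our own API (placeholder endpoint).
  const res = await fetch('https://internal.example.com/api/validate-system-user', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ user_id: event.user.user_id }),
  });
  if (!res.ok) {
    api.access.deny('system user failed internal validation');
    return;
  }

  // Add the custom claims our services expect on the access token.
  api.accessToken.setCustomClaim('https://example.com/claims/service', true);
};
```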
So essentially we’re not using M2M tokens / the client_credentials grant anymore; it’s just a normal user. We manipulate the token as needed in a custom Action and this seems to work fine. Our apps cache the tokens internally, but there are a lot of edge cases (like local development) that basically mean we could still generate more than our quota. Since normal tokens don’t have the quota, this works well for us and probably will for many others; the client side looks roughly like the sketch below. The only slight pain is configuring two extra parameters for this system username/password on top of the client_id/secret.
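For reference, the client-side change is basically this, assuming the password-realm grant against the “system” connection (the domain, audience, and env var names are placeholders):

```typescript
// Fetch a token via the password-realm grant instead of client_credentials,
// and reuse it from an in-memory cache until shortly before it expires.
type CachedToken = { accessToken: string; expiresAt: number };
let cached: CachedToken | undefined;

export async function getServiceToken(): Promise<string> {
  // Reuse the cached token with a 60-second safety margin.
  if (cached && cached.expiresAt - 60_000 > Date.now()) return cached.accessToken;

  const res = await fetch(`https://${process.env.AUTH0_DOMAIN}/oauth/token`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'http://auth0.com/oauth/grant-type/password-realm',
      realm: 'system',                        // the dedicated database connection
      username: process.env.SYSTEM_USERNAME,  // the single "system" user
      password: process.env.SYSTEM_PASSWORD,
      client_id: process.env.AUTH0_CLIENT_ID,
      client_secret: process.env.AUTH0_CLIENT_SECRET,
      audience: process.env.API_AUDIENCE,
      scope: 'openid',
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);

  const body = (await res.json()) as { access_token: string; expires_in: number };
  cached = {
    accessToken: body.access_token,
    expiresAt: Date.now() + body.expires_in * 1000,
  };
  return cached.accessToken;
}
```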
I would love a solution to this problem. It leaves Auth0 customers wide open to uncontrollable costs. I’m fine if it requires some coding, but I don’t even see how it’s possible right now.
As @matt.howard says, we need… “some kind of mechanism for a) getting a reference to a token after it is generated, and b) a mechanism for telling the flow to exit successfully with a token retrieved from the cache before the token is generated.”
I really wonder how this approach of billing for M2M tokens was ever supposed to work in real life. Again, the entities that are requesting tokens (and generating cost) can be completely outside of the tenant’s control. How is the tenant supposed to be responsible for a cost they can’t control?
Auth0, before you even try to change anything, what is your guidance?
We use M2M tokens as part of our automated QA process. There have at times been issues where more tokens were requested than necessary, which pushed us over our limit. We would really like to see a per-application/client_id quota feature to prevent overages.
So I managed to follow up on that with the team, and there are plans to implement this in the last quarter of this year. Thank you for your understanding!
Desperately hoping for news on this feature. I tried many different approaches using Action flows, with no success. The nuclear option would be to write my own endpoint for authenticating against our APIs.
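Rough shape of what I mean by that, purely as a sketch: the framework, the client store, the signing key handling, and all names here are placeholders, not what we would actually ship.

```typescript
// A self-hosted token endpoint that mints its own JWTs instead of calling
// Auth0's client_credentials flow.
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());

// Placeholder client registry; a real version would hash secrets and load them
// from persistent storage.
const clients: Record<string, { secret: string; scope: string }> = {
  'svc-reporting': { secret: process.env.SVC_REPORTING_SECRET ?? '', scope: 'read:reports' },
};

app.post('/oauth/token', (req, res) => {
  const { client_id, client_secret } = req.body ?? {};
  const client = clients[client_id];
  if (!client || client.secret !== client_secret) {
    return res.status(401).json({ error: 'invalid_client' });
  }

  // Sign a short-lived access token that our APIs can verify with the same key.
  const accessToken = jwt.sign(
    { scope: client.scope },
    process.env.TOKEN_SIGNING_KEY as string,
    { algorithm: 'HS256', subject: client_id, audience: 'https://api.example.com', expiresIn: '1h' },
  );
  return res.json({ access_token: accessToken, token_type: 'Bearer', expires_in: 3600 });
});

app.listen(3000);
```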
Thank you for the update - is there any additional information regarding when we can expect Auth0 to deliver a built-in solution for caching M2M authentication tokens?
For context - we mainly use M2M tokens to allow our (B2B) clients to integrate with our APIs. While rate limiting will help mitigate overage costs for us, it would also prevent us from delivering a viable solution for our customers who expect consistent API access. With a native token caching solution, we can limit the impact of these client interactions on our M2M token limits before rate limiting is even necessary.