I’ve been running into the “generated token is too large” error off and on for a while now. The documentation for id_token links to the old community site, which now just redirects to the homepage here.
I’m building a SPA that uses the Lock popup mode for authentication. I’m trying to attach AWS keys to the id_token, and with more than one of them the token bumps up against the URL length limit. So instead of passing back the id_token, I’ve been passing back an access_token that I then use to fetch the id_token claims I added in a rule. This works for some users but not for others — for them I continue to get the “generated token is too large” error. I’ve lowered the scope on my Lock instance to just openid, and set the audience to an API with minimal scope access. Since the token being passed back is an API access token that I use with lock.getUserInfo, I’m not sure why it fails for some users and not others. The only differences between a failing and a working user are a username that’s 2 characters longer and two small user_metadata claims that aren’t requested at all.
Basically, here’s the authorization flow I’m using; I’m not certain it’s correct:
- Open Lock with scope of ‘openid’ and audience of a minimal API
- Log in and run a rule that attaches multiple AWS tokens to the id_token, plus some app_metadata (right now that information is identical for every user, so it’s not the cause)
- Pass back the access_token in the URL, parse the hash manually in lock.resumeAuth, and store the access_token so the frontend can call getUserInfo
- Call getUserInfo with that access_token to retrieve the profile with the AWS tokens and app_metadata attached
- Store tokens and data in application
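For context, here’s a minimal sketch of the kind of rule I’m describing in the flow above. The namespace URL, claim names, and token values are placeholders, not my real values, and the mock invocation at the bottom only exists to illustrate what the rule does to the context:

```javascript
// Sketch of a rule that attaches AWS tokens and app_metadata to the
// id_token via namespaced custom claims. All names/values are placeholders.
function attachAwsTokens(user, context, callback) {
  const namespace = 'https://example.com/'; // placeholder claim namespace
  const awsTokens = { s3: 'token-a', dynamo: 'token-b' }; // placeholder AWS tokens

  // Add the custom claims to the id_token
  context.idToken[namespace + 'aws_tokens'] = awsTokens;
  context.idToken[namespace + 'app_metadata'] = user.app_metadata || {};

  callback(null, user, context);
}

// Simulate the rule with a mock user/context (illustration only;
// in Auth0 the rule is an anonymous function run by the pipeline)
const user = { app_metadata: { plan: 'basic' } };
const context = { idToken: {} };
attachAwsTokens(user, context, function (err, u, ctx) {
  console.log(ctx.idToken['https://example.com/aws_tokens']);
});
```

It’s these claims, multiplied by several AWS tokens, that push the generated token over the size limit when the id_token itself is returned in the URL.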
Is this the correct way to go about this? Why is it failing on near-identical profiles? Is there something smaller I can pass back that will still work for my purposes?