
Authorization Extension fails with blocked event loop when signing in

api
login
error
api-authorization

#1

We have been experiencing issues lately where a user authenticating into our system often receives the error “Authorization Extension: Blocked event loop”.

Usually a retry would go through just fine. Any leads to why this is happening or any workaround?

Info:

  • Auth0 C# Authentication Api v3.7.0
  • AWS

#2

I believe we are getting this error when users log in, but I’ve noticed it to a greater extent when adding a member to a group (via the Authorization API). I’m able to replicate it once every 10-20 attempts. Here’s what the Authorization Extension rule (or whatever is executing the rule) is logging; I retrieved these logs using the debug rule console in the Auth0 admin UI. Note: this is not a custom rule, it is the rule generated by the Authorization Extension, and there are no other custom rules or hooks that run during this process.

6:08:41 PM: 2017-09-15T23:08:41.623Z - info: Starting Authorization Extension - Version: 2.4.2
6:08:41 PM: 2017-09-15T23:08:41.625Z - info:  > WT_URL: https://sandbox.it.auth0.com/api/run/traena-dev/adf6e2f2b84784b57522e3b19dfc9201
2017-09-15T23:08:41.625Z - info:  > PUBLIC_WT_URL: https://traena-dev.us.webtask.io/adf6e2f2b84784b57522e3b19dfc9201
6:08:41 PM: 2017-09-15T23:08:41.625Z - info: Initializing the Webtask Storage Context.
6:08:41 PM: {
  "code": 500,
  "error": "Script generated an unhandled asynchronous exception.",
  "details": "Error: Blocked event loop",
  "name": "Error",
  "message": "Blocked event loop",
  "stack": "Error: Blocked event loop\n    at <anonymous>:1:38\n    at nextTickCallbackWith0Args (node.js:489:9)\n    at process._tickDomainCallback (node.js:459:13)"
}
6:08:41 PM: finished webtask request 1505516919715.82915 with HTTP 500 in 2040ms
6:08:41 PM: Code blocked the event loop for longer than 2000ms and is now terminated.
6:08:41 PM: setting tenant quarantine
6:08:41 PM: faulting webtask container following request 1505516919715.82915 failure

This leaves my users in a half-created state. They exist in Auth0 as users, but they are not in the group. I have to maintain logic to retry whenever I receive this 500 error from the Authorization API.
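For anyone else maintaining this workaround, here is a minimal sketch of the retry logic, assuming the group-membership call is wrapped in a function returning the HTTP status and body. All names here are hypothetical placeholders, not the real Auth0 SDK:

```python
import time

def retry_on_blocked_event_loop(call, max_attempts=3, backoff_seconds=0.5):
    """Invoke `call()` and retry when it returns HTTP 500, which is what the
    Authorization Extension responds with on "Blocked event loop".
    `call` must return a (status_code, body) tuple."""
    for attempt in range(1, max_attempts + 1):
        status, body = call()
        if status != 500:
            return status, body  # success, or a non-retryable error
        if attempt < max_attempts:
            time.sleep(backoff_seconds * attempt)  # linear backoff before retrying
    return status, body  # all attempts exhausted; surface the last 500
```

You would then wrap the group call with something like `retry_on_blocked_event_loop(lambda: add_member_to_group(group_id, user_id))`, where `add_member_to_group` stands in for whatever client call you use against the Authorization API.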


#3

Thanks for the answer, dom. A GitHub ticket has been opened here, and I hope a solution will be found soon.


#4

I’m hoping to get an answer from an official Auth0 team member regarding this issue, since it is happening quite often and our end users are seeing it too (which doesn’t look good or professional).

Could a member of the Auth0 team kindly give us an update on this issue please?

Thank you


#5

Hey Auth0, this issue is happening to us too.

Is it tracked anywhere else? The silence is a little concerning; we will need to start considering other IdPs, as this issue makes Auth0 unusable.


#6

Having the same problem once a day when a user tries to log in. Any solution?


#7

Still no answer from the Auth0 team after almost 3 weeks. This is quite disappointing. The GitHub ticket hasn’t received any update either.


#8

@jmangelo would you be able to help with this issue?
It keeps happening fairly regularly.
Thank you


#14

@admin7 @davidames76 @dom @huseyin Well, I haven’t discovered much. I can confirm that this is something we are aware of and trying to address; however, there’s not really much more I can share at this time. I’ll keep you posted if I come up with something more useful.


#15

I’ve been following this because we’ve also been impacted. @jmangelo, you mentioned (Auth0) having made a few preliminary changes; thank you for that.

One question I’ve had is whether this occasional error occurs regardless of whether the extension’s data is stored via Webtask Storage or Amazon S3.

It’s on my to-do list to switch to S3 before we move to production, so that we are not limited in terms of space.

I’ll keep watching for this error on our side, but any feedback on whether or not the choice of storage affects it would be greatly appreciated.


#17

So far the issue is not being treated as storage-related, and neither the actions already taken nor those planned have any association with storage. However, given that the error could be seen as somewhat associated with processing time, I would not be comfortable saying that storage has absolutely no effect. In particular, you could argue that S3 allows for bigger data sets, so it could somehow have an impact, even if only on the frequency; but as I said, this is not being traced to a storage-related issue.


#18

@admin7 @davidames76 As I mentioned before, we had a few actions planned to try to address this, and although the first set of measures did not result in a noticeable improvement, we have now made a few other changes. Assuming your tenants/domains are in the US region, can you please let me know if you notice any improvement? In addition, it would be worthwhile to know if, in the tenant where you experience this issue, you also have active usage of other extensions, in particular whether you have configured any of the log-shipping extensions. Thanks in advance for any info you can provide.


#19

Hi @jmangelo.
We are affected by the same issue.
It seems to occur on roughly 1 in 40 token delegation requests.
This logs the user out of our product, which is very frustrating for them. As a result, we are forced into an ugly workaround, e.g. retries on 401, which is not a good architecture pattern :confused:

We are using all of the Rules, and we are also using these extensions:

  • Auth0 Authorization
  • User Import/Export
  • Real-time Webtask Logs
  • Auth0 Authentication Debugger
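For what it’s worth, the retries-on-401 workaround described above can be kept in one place, along these lines. All names are hypothetical placeholders: `delegate` stands for whatever call performs the token delegation, and `reauthenticate` for whatever refreshes the session:

```python
def delegate_with_retry(delegate, reauthenticate, max_retries=2):
    """Perform token delegation; on a 401 (as described above), re-authenticate
    and retry, up to `max_retries` extra attempts.
    `delegate` must return a (status_code, token_or_error) tuple."""
    status, result = delegate()
    retries = 0
    while status == 401 and retries < max_retries:
        reauthenticate()  # refresh the session before retrying
        status, result = delegate()
        retries += 1
    return status, result
```

Placing this in a single HTTP middleware at least contains the ugliness in one spot until the underlying issue is fixed.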

#20

Hi, we (Auth0) are working on this issue. I want to assure everyone here that this issue currently has top priority. We are testing a fix candidate, and in the last hour we also put in place a temporary workaround for the US region that should relieve the situation for US tenants. I will update this answer as we progress.

UPDATE: A fix has been implemented. We will update with an ETA on when it will reach the production and development tenants as soon as we have it.