Hello community. We are trying to set up the supported extension “Auth0 Logs to [AWS] CloudWatch”. We have it configured in both our test and prod tenants, and we see the same problem in both: the logs being shipped to CloudWatch are a month old (so pretty useless).
The problem (I believe) is simple to state: since we have an enterprise account, Auth0 kindly keeps one month of event logs for us, but this extension can only ship a small-ish number of log events per cycle (5-minute minimum cycle), and it starts from the beginning of our logs, so it can never catch up to today. In fact, it wouldn’t keep up even if we could tell it “don’t start from the beginning of the logs, just start with anything from now on”, because we (our users) are generating more log events than it can process.
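To put a rough number on why it can’t catch up, here’s a back-of-envelope calculation under assumed figures (one batch of 100 events per 5-minute cycle; our real event rate and the extension’s actual per-cycle throughput may differ):

```python
# Rough ceiling on shipped events, ASSUMING one batch of 100 per 5-minute cycle.
# (The extension may send several batches per cycle; treat these as assumptions.)
batch_size = 100                # default BATCH_SIZE
cycles_per_day = 24 * 60 // 5   # 288 five-minute cycles per day
max_events_per_day = batch_size * cycles_per_day
print(max_events_per_day)       # 28800 -> ~28.8k events/day at this rate
```

If our tenants generate more than that per day, the backlog only ever grows.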
When we configure it, BATCH_SIZE defaults to 100, and increasing that number seems to have no effect. In the GitHub repo, I can see the line of code where the batch size appears to be capped at 100.
That is correct: the batch size is limited to 100 log events per batch. However, multiple batches are sent during each cycle. We’re using the Sumo Logic extension, so it’s not an apples-to-apples comparison, but we had no trouble getting our 30 days of historical data into Sumo. You could try setting the START_FROM parameter in the extension’s config:
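Purely as an illustration of where it goes (the actual values are entered in the extension’s settings form in the Auth0 dashboard, not in code):

```python
# Illustrative only: these are set in the extension's settings UI,
# shown here as a dict just for readability.
extension_settings = {
    "BATCH_SIZE": "100",                       # capped at 100 events per batch anyway
    "START_FROM": "<checkpoint to start from>", # skip everything before this checkpoint
}
```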
Thanks for that reply. In my ignorance, is START_FROM (the checkpoint ID to start from) best set using an existing Auth0 log entry’s ‘log_id’? (e.g. “log_id”: “90020180726221336280679066259347277553541578856958263410”)
I believe that is correct. I seem to recall using it in the past with the log_id of the log entry I wanted to start from, but I’m going from memory here. It would help if the docs were more explicit; I’ll try submitting a PR to add an explanation of START_FROM to the docs.
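If it helps, here’s a rough sketch of how you could grab a recent log_id to seed START_FROM, using the Management API logs endpoint (assumes a token with the read:logs scope; the tenant domain and token are placeholders):

```python
# Sketch: fetch the most recent log entry from the Auth0 Management API and
# print its log_id, which could then be pasted into START_FROM.
import requests

TENANT_DOMAIN = "your-tenant.auth0.com"  # placeholder
MGMT_API_TOKEN = "eyJ..."                # placeholder Management API token (read:logs)

resp = requests.get(
    f"https://{TENANT_DOMAIN}/api/v2/logs",
    headers={"Authorization": f"Bearer {MGMT_API_TOKEN}"},
    params={"sort": "date:-1", "per_page": 1},  # newest entry first
)
resp.raise_for_status()
latest = resp.json()[0]
print(latest["log_id"])  # candidate value for START_FROM
```

That said, starting from “now” only helps if the extension can keep pace with the rate of new events, per the earlier throughput concern.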
They haven’t added official docs for the CloudWatch extension yet. I’ll start it and see how far I get. The Sumo Logic docs could use an update too (no mention of START_FROM).