Hi there,
I’m trying to get some numbers via the API; specifically: how many requests did we have in a given time interval, grouped by type.
I’m using the endpoint /api/v2/logs
I’ve added ‘include_totals=true’
I realize that the length field returned only reflects the number of records in the current response, so, since I’m not aware of a smarter way, I’m calling the endpoint repeatedly to page through to the last page. But it seems that I cannot get past page 20 (1,000 entries).
I assume this limitation is by design, so my question is: is there another way to get such statistics, or do I have to break my time interval into smaller intervals?
It’s possible that your requests are hitting the rate limit for the Management API. You can find those details here: Management API Endpoint Rate Limits
Have you tried adding the per_page parameter? You can include up to 100 entries per page, so you should be able to request twice as many entries per request as the default of 50.
Also, if there are any log types you are not interested in, you can add a q parameter to filter them out using Lucene Query Syntax
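To make the two suggestions above concrete, here is a minimal sketch of building a logs request with per_page=100, include_totals=true, and a Lucene q filter. The tenant domain is a placeholder, and you would still need to send the request with a Management API token in the Authorization header:

```python
import urllib.parse

def build_logs_url(domain, page, per_page=100, q=None):
    """Build one page's URL for the Management API /api/v2/logs endpoint."""
    params = {"page": page, "per_page": per_page, "include_totals": "true"}
    if q:
        params["q"] = q  # Lucene query, e.g. 'type:s OR type:f'
    return f"https://{domain}/api/v2/logs?" + urllib.parse.urlencode(params)

# Example: request the first page of successful (s) and failed (f) logins.
url = build_logs_url("your-tenant.auth0.com", page=0, q="type:s OR type:f")
```

A paging loop would then increment page until the response comes back empty, keeping in mind the 1,000-entry cap discussed below.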
Hi,
thanks for your input, but rate limits are not the issue here.
My mission is to figure out how many successful/failed logins/signups we had during some specific time intervals.
I’m querying like this, for example:
/api/v2/logs?q=type:s AND date:[2021-01-02T19:30:00 TO 2021-01-02T21:10:00]&include_totals=true
The issue is that I cannot see the total count of entries matching my query even if I set include_totals=true. So I started paging through the results, only to find that there is a limit of 1000 entries even when paging.
We may very well have many thousands of entries during these time intervals, so even if I changed my script to get these log counts in another way, it would require thousands of requests… That doesn’t seem like the right way to solve this.
If you go to your tenant settings, open Advanced, and scroll down to Migration, do you see “Legacy Logs Search V2”? If so, you can toggle it off to opt into Tenant Logs Search Engine v3, which would likely help. Migrate to Tenant Log Search v3
I will have to check and see if this could be the issue though.
If the option is not in your settings, then it means that you are already migrated to v3, so I believe that is not the issue. It looks like the 1000 limitation still applies. I will do some more research on this and let you know what I find!
It looks like many users who face this limitation will offload logs to another service using a logging extension. I’ll keep looking for solutions for the Management API, but it does look like there is a 1,000 entry limit, unfortunately.
Thanks,
ideally, I would like a way to just query for the numbers. I’m not interested in the individual log entries in this case, so that would just be a waste of bandwidth.
I would expect to be able to find such numbers using the stats endpoint, for example.
UPDATE: Another solution, inspired by your suggestion about logging services, could be the ability to export log entries as a sort of background job.
You may have already looked into this option, but there is log streaming, which allows you to export logs to a given URL or service: Log Streams
Does that look like what you had in mind by exporting as a background job?
Log streams would be the best option for the task because of the number of log entries. This option would require setting up an external service or using your own API.
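To illustrate the receiving side, here is a minimal sketch of the counting logic such an external service might run: the stream delivers batches of log events, and tallying them by type replaces querying the Management API for counts. The payload shape used here (a JSON array of objects with a "data" field holding the log event) is an assumption; check the Log Streams documentation for the exact schema:

```python
import json
from collections import Counter

type_counts = Counter()

def handle_batch(body: bytes) -> None:
    """Tally one delivered batch of log events by event type.

    Assumes each element looks like {"data": {"type": "s", ...}};
    verify this against the actual log stream payload.
    """
    for event in json.loads(body):
        type_counts[event.get("data", {}).get("type", "unknown")] += 1

# Example batch: two successful logins ("s") and one failed login ("f").
handle_batch(json.dumps([
    {"data": {"type": "s"}},
    {"data": {"type": "s"}},
    {"data": {"type": "f"}},
]).encode())
```

In practice this function would sit behind the HTTP endpoint the log stream posts to, with the counts persisted somewhere queryable.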
Unfortunately, we don’t have a way to query these stats directly at the moment.