Get total number of successful sign-ups etc. in a given interval

Hi there,
I’m trying to get some numbers via the API; specifically: how many requests did we have in a given time interval, grouped by type.

I’m using the endpoint /api/v2/logs
I’ve added ‘include_totals=true’

I realize that the length field returned reflects the number of records returned per request, so - since I’m not aware of a smarter way - I’m calling the endpoint several times in order to get to the last page. But it seems that I cannot get past page 20 (1000 entries).

I’m supposing that this is a limitation introduced by design, so my question is: Is there another way to get such statistics, or do I have to break my time intervals into smaller intervals?


Hi @hnje,

Thanks for joining the Community!

It’s possible that your requests are hitting the rate limit for the Management API. You can find those details here:

Have you tried adding the per_page parameter? You can include up to 100 entries per page, so you should be able to request twice as many entries per request.

Also, if there are any log types you are not interested in, you can add a q parameter to filter them out using Lucene Query Syntax.
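A minimal sketch of what those two suggestions look like together. The tenant domain is a placeholder, and only the URL building is shown; the actual request would carry an Authorization: Bearer header with a Management API token:

```python
from urllib.parse import urlencode

def build_logs_url(domain, q, page=0, per_page=100):
    """Build a GET /api/v2/logs URL with a Lucene filter and pagination."""
    params = {
        "q": q,
        "page": page,
        "per_page": per_page,       # 100 is the maximum per page
        "include_totals": "true",
    }
    return f"https://{domain}/api/v2/logs?{urlencode(params)}"

# Keep only successful logins (type:s) and successful signups (type:ss)
url = build_logs_url("YOUR_TENANT.auth0.com", "type:s OR type:ss")
```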

Thanks for your input, but the rate limit is not the issue here.

My mission is to figure out how many successful/failed logins/signups we had during some specific time intervals.

I’m querying like this, for example:
/api/v2/logs?q=type:s AND date:[2021-01-02T19:30:00 TO 2021-01-02T21:10:00]&include_totals=true

The issue is that I cannot see the total count of entries matching my query even if I set include_totals=true. So I started paging through the results, only to find that there is a limit of 1000 entries even when paging.

We may very well have many thousands of entries during these time intervals, so even if I changed my script to get these log counts in another way, it would end up in thousands of requests… That’s not the right way to resolve this issue.
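If I do end up breaking my time intervals into smaller intervals, the splitting itself could look something like this (a rough sketch; the step size would need tuning so each chunk stays under the 1,000-entry cap):

```python
from datetime import datetime, timedelta

def split_interval(start, end, step):
    """Yield (chunk_start, chunk_end) pairs covering [start, end)."""
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        yield cur, nxt
        cur = nxt

# Split the example window 19:30-21:10 into 30-minute chunks;
# each pair then becomes a date:[start TO end] range in the q parameter.
chunks = list(split_interval(
    datetime(2021, 1, 2, 19, 30),
    datetime(2021, 1, 2, 21, 10),
    timedelta(minutes=30),
))
```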

Do let me know if you have other suggestions :slight_smile:

Oh, I see. After taking another look at the docs, unfortunately, I did find this limitation:

Besides the limitation of 100 log events per request to retrieve logs, you may only paginate through up to 1,000 search results.

The recommendation is to refine the search if you get the 414 Request-URI Too Large error.

If you go to your tenant settings and go to Advanced and scroll down to Migration, do you see “Legacy Logs Search V2”? If so, you can toggle that off to opt into Tenant Logs Search Engine v3 which would likely help.

I will have to check and see if this could be the issue though.

Yeah, I looked for it, and that option is not available, unfortunately.

But why would you remove the actual total counts from the response? Clearly, that’s a significant drawback. What do other users say about this?

If the option is not in your settings, then it means that you are already migrated to v3, so I believe that is not the issue. It looks like the 1000 limitation still applies. I will do some more research on this and let you know what I find!


It looks like many users who face this limitation will offload logs to another service using a logging extension. I’ll keep looking for solutions for the Management API, but it does look like there is a 1,000 entry limit, unfortunately.

Ideally, I would like a way to just query for the numbers. I’m not interested in the individual log entries in this case, so fetching them would just be a waste of bandwidth.

I would expect to be able to find such numbers using the stats endpoint, for example.

UPDATE: Another solution, inspired by your suggestion about logging services, could be the ability to export log entries as a sort of background job.


I agree that a stats endpoint for logs would be very helpful. It’d be great to get this as a feature request for our product team. You can submit one here:

That sounds like a promising idea about exporting the logs!


Yeah, but from what I can see, it’s not an option…?

You may have already looked into this option, but there is log streaming, which allows you to export logs to a given URL or service:

Does that look like what you had in mind by exporting as a background job?

What I meant was that it would be nice if we had the option to export logs to a file.

Although I haven’t used this specific feature, it could possibly be something like this - just for log entries:

Oh, I see. Yes, that is correct. We don’t have a bulk export job for logs at the moment. I believe the current options include:

Log streams would be the best option for the task because of the number of log entries. This option would require setting up an external service or using your own API.
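Once the events arrive at your own service, the counting itself is trivial. A sketch of the tallying step, assuming each delivered event carries Auth0’s type code:

```python
from collections import Counter

def tally_by_type(events):
    """Count log events by their Auth0 type code
    (e.g. 's' = successful login, 'ss' = successful signup,
    'f' = failed login)."""
    return Counter(e["type"] for e in events)

# Example batch as it might arrive from a log stream delivery
batch = [{"type": "s"}, {"type": "ss"}, {"type": "s"}, {"type": "f"}]
counts = tally_by_type(batch)
```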

Unfortunately, we don’t have a way to query these stats directly at the moment.

This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.