The potential load shedding error can be thrown either because too many requests were made to the Log Server, which blocks the query, or because the query took a long time and caused the service to restart. In the latter case, you can try running the query again after 5-10 minutes and it should work; if it does not, try making fewer requests.
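If you want to automate that retry, something along these lines works. This is only a sketch: the endpoint URL and token are placeholders, not the real service details.

```python
import time

import requests

LOGS_URL = "https://YOUR_TENANT.example.com/api/v2/logs"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token


def query_logs_with_retry(params, retries=2, wait_seconds=300):
    """Run the log query, pausing ~5 minutes between attempts if it fails."""
    for attempt in range(retries + 1):
        response = requests.get(LOGS_URL, headers=HEADERS, params=params, timeout=30)
        if response.ok:
            return response.json()
        if attempt < retries:
            time.sleep(wait_seconds)  # wait 5-10 minutes before retrying
    response.raise_for_status()
```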
If you have any other questions, feel free to leave a reply!
Hello @nik.baleca
Even without making too many requests, it is still throwing the same error. For example, if I wait for an hour and then hit the API for page 11 and above, it still throws the same error. However, hitting the API for any page from 0 to 10 returns a successful response.
The error specifically occurs on page 11 and above with a page size of 100 (page 101 and above if the page size is 10 instead of 100). There are no issues for pages 0 to 10 (pages 0-100 with a page size of 10), but since the error occurs on page 11 and above, we are missing out on the remaining data.
Note that we’re strictly following the rate limits for the API.
I have been facing this error on every tenant. I have sent one of the tenant names in a DM.
The error seems to follow a pattern: the API throws it whenever page_size multiplied by page_number exceeds ~1,000 (with a page size of 100, page 11 gives 11 × 100 = 1,100; with a page size of 10, page 101 gives 101 × 10 = 1,010).
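For reference, this is roughly how we see it reproduce. The endpoint URL and token below are placeholders; only the page/per_page parameter names come from our actual calls.

```python
import time

import requests

LOGS_URL = "https://YOUR_TENANT.example.com/api/v2/logs"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token


def find_failing_page(page_size):
    """Walk pages in order and report the first page the API rejects."""
    page = 0
    while True:
        response = requests.get(
            LOGS_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        if not response.ok:
            print(f"first failure: page {page}, page * per_page = {page * page_size}")
            return page
        page += 1
        time.sleep(1)  # stay within the API rate limits


# Observed: per_page=100 fails from page 11 (11 * 100 = 1,100) and
# per_page=10 fails from page 101 (101 * 10 = 1,010) -- both just past 1,000.
```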
I could not find any issues when I inspected your tenant. However, I was able to reproduce the error on my end when running the log query.
The issue lies in the fact that you are exceeding the 1,000-record limit for the query (per_page multiplied by page). In your case, 11 multiplied by 100 is 1,100, which exceeds the specified limit; that is why your query attempt receives the potential load shedding error. You would need to refine your search to make sure you do not exceed this limit when retrieving logs.
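As an illustration, here is a minimal sketch of paging only as deep as the limit allows and then narrowing the query instead of requesting further pages. The endpoint URL and token are placeholders, and the date-filter parameter in the final comment is hypothetical, not the service's actual query syntax.

```python
import requests

LOGS_URL = "https://YOUR_TENANT.example.com/api/v2/logs"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token
RECORD_LIMIT = 1000  # page * per_page must stay at or below this


def fetch_logs(query_params, per_page=100):
    """Collect pages while page * per_page stays within the record limit."""
    results = []
    page = 0
    while page * per_page <= RECORD_LIMIT:
        response = requests.get(
            LOGS_URL,
            headers=HEADERS,
            params={**query_params, "page": page, "per_page": per_page},
            timeout=30,
        )
        response.raise_for_status()
        batch = response.json()
        results.extend(batch)
        if len(batch) < per_page:  # no more matching records
            break
        page += 1
    return results


# To reach records beyond the limit, refine the search rather than paging
# deeper, e.g. by splitting the time range into smaller windows
# (the "q" parameter and date syntax here are hypothetical):
# fetch_logs({"q": "date:[2024-01-01 TO 2024-01-07]"})
```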