Logged-in user null with Auth0/Spark Java

Hello All,
I am trying to get Auth0 integrated into my web app, which uses the spark-java framework. The problem is that while the authentication works perfectly, including the callback (I see the new user created on Auth0's website and my website gets redirected), I can't access the logged-in user info. I've tried several methods, like SessionUtils.getAuth0User(request.raw()), and none of them are working. For example, in the tutorial provided here: auth0-servlet-sample/01-Login at master · auth0-samples/auth0-servlet-sample · GitHub, they access the logged-in user info like so:
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    final Auth0User user = SessionUtils.getAuth0User(req);
    if (user != null) {
        req.setAttribute("user", user);
    }
    req.getRequestDispatcher("/WEB-INF/jsp/home.jsp").forward(req, res);
}
I've tried doing something similar with Spark, but since get works a bit differently in Spark, I do this:

port(Integer.valueOf(System.getenv("PORT")));
staticFileLocation("/spark/template/freemarker");
String clientId = System.getenv("AUTH0_CLIENT_ID");
String clientDomain = System.getenv("AUTH0_DOMAIN");

get("/", (request, response) -> {
    Map<String, Object> attributes = new HashMap<>();
    Auth0User user = SessionUtils.getAuth0User(request.raw());
    if (user != null) {
        attributes.put("user", user);
        attributes.put("loggedIn", true);
    } else {
        attributes.put("loggedIn", false);
    }
    attributes.put("clientId", clientId);
    attributes.put("clientDomain", clientDomain);
    return new ModelAndView(attributes, "index.ftl");
}, new FreeMarkerEngine());
The code always reports the user as null, even though the user is created and stored in the database and the sign-in works properly with no runtime or console errors. For the other methods I tried, I replaced the line where I set the user variable with the following.
Alternate Method 1:
Auth0User user = (Auth0User) request.session().attribute("auth0User");
Here auth0User is the same string literal Auth0 uses in its implementation of SessionUtils, as shown in the source code referenced here: auth0-java-mvc-common/SessionUtils.java at master · auth0/auth0-java-mvc-common · GitHub

Alternate Method 2:
Auth0User user = (Auth0User) request.raw().getUserPrincipal();
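For debugging, one way to see what the callback actually stored is to dump the raw session contents and the user principal. A small sketch (not part of the original code), assuming Spark 2.x where request.raw() exposes the underlying HttpServletRequest; it needs javax.servlet.http.HttpSession and java.util.Enumeration imports:

// Debugging sketch: prints the principal and every session attribute, so you can
// see whether anything was ever stored under the "auth0User" key.
get("/debug-session", (request, response) -> {
    StringBuilder out = new StringBuilder();
    out.append("principal = ").append(request.raw().getUserPrincipal()).append("\n");
    HttpSession session = request.raw().getSession(false);
    if (session == null) {
        return out.append("no session").toString();
    }
    Enumeration<String> names = session.getAttributeNames();
    while (names.hasMoreElements()) {
        String name = names.nextElement();
        out.append(name).append(" = ").append(session.getAttribute(name)).append("\n");
    }
    return out.toString();
});

Hitting /debug-session right after signing in should show whether anything was stored under auth0User at all.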

In addition, this is my JavaScript code running client-side for the authentication:

var lock = new Auth0Lock('${clientId}', '${clientDomain}', {
    auth: {
        redirectUrl: 'http://localhost:5000/build',
        responseType: 'code',
        params: {
            scope: 'openid user_id name nickname email picture'
        }
    }
});

$(document).ready(function()
{
    $('.signup').click(function()
    {
        doSignup();
    });
});

function doSignup() {
    lock.show();
}
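For context, with responseType: 'code' and redirectUrl pointing at http://localhost:5000/build, something server-side has to handle that redirect, exchange the code for tokens, and store the resulting Auth0User in the session under the auth0User key, otherwise SessionUtils.getAuth0User(request.raw()) has nothing to find. A rough sketch of what such a route could look like; exchangeCodeForUser is a hypothetical placeholder for the code-for-token exchange, not an actual Auth0 API:

// Rough sketch only. exchangeCodeForUser(...) is a hypothetical helper standing in
// for the code-for-token exchange; the point is that an Auth0User has to end up in
// the session under "auth0User" for SessionUtils.getAuth0User(request.raw()) to work.
get("/build", (request, response) -> {
    String code = request.queryParams("code");
    Auth0User user = exchangeCodeForUser(code);           // hypothetical helper
    request.session(true).attribute("auth0User", user);   // same key SessionUtils reads
    response.redirect("/");
    return null;
});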

I have no idea why user evaluates to null every time, and I would love some feedback on what I'm doing wrong. Thanks.

Sorry to bring a slightly older post back from the grave, but I'm having the exact same issue, and I think I understand the cause of it, but not how to resolve it within my application.

My application also uses the Spark Java framework and Auth0, and I authenticate users via an ADFS server. When the scope parameters are returned to the callback URL, I can see the parameters (the token) in the URL. When I manually copy the token out of the URL and paste it into the debugger at jwt.io, everything checks out okay.

I believe the issue has to do with the # that is prepended to the returned parameters in the callback URL. All of my other Spark routes can be parsed for parameters appropriately and do not contain a # in the URL. Does anyone know of a good way to account for this within my Spark application, or within the HTML code that makes up my web pages?
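For what it's worth, anything after the # is a URL fragment, and browsers never send the fragment to the server, so no Spark route (or any other server-side code) can read it directly; it either has to be parsed client-side or the flow has to be switched so the parameters come back as a query string (responseType: 'code'). A minimal sketch of what a Spark callback route does and does not see, assuming a hypothetical /callback path:

// With responseType 'code' the redirect looks like /callback?code=...&state=...
// and these are ordinary query parameters. Parameters delivered after a '#' are a
// browser-side fragment and never reach the server, so queryParams(...) returns null.
get("/callback", (request, response) -> {
    String code  = request.queryParams("code");   // non-null with the code flow
    String state = request.queryParams("state");  // non-null with the code flow
    System.out.println("code=" + code + ", state=" + state);
    return "ok";
});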

Just checking in. Does anyone know how to modify the returned hash prefix using the Auth0 Java libraries, or is there a way to account for the # prefix using the Spark Java framework?

Hey there!

Sorry for the delay in response! We're doing our best to provide the best developer support experience out there, but sometimes the number of incoming questions is just too big for our bandwidth. Sorry for the inconvenience!

Do you still require further assistance from us?