Dark models when Chrome uses OpenGL rather than ANGLE

Hi,

If I run the Cesium Sandcastle 3D Models example on an embedded version of Chromium (part of QtWebEngine) using the OpenGL renderer rather than ANGLE, models appear too dark:

Any idea where the problem might be (Cesium, Qt, Chrome, etc.)? I’ve tried a few other WebGL apps and they don’t show any difference. It looks OK when ANGLE is used.

Thanks.

The Lighting sandcastle also shows the problem with “Fixed lighting”

But with “Flashlight” lighting the result looks much closer to the expected one:

3D Tiles are dark too.

In the textureIBL() shader function, the diffuse contribution is being computed as 0.

This seems to be because model_sphericalHarmonicCoefficients contains all zeros (which can be seen by printing out environmentMapManager.sphericalHarmonicCoefficients in ImageBasedLightingPipelineStage.process).

Tracing further back, it looks like _irradianceMapTexture, which the coefficients are derived from, is incorrectly computed by the ComputeIrradianceFS shader, on this line:

vec4 color = czm_textureCube(u_radianceMap, lookupDirection, 0.0);

The values sampled from the texture here don’t seem to match the values I see in updateIrradianceResources() before the shader is run.

I might be lacking some context, but looking at the implementation of czm_textureCube, it takes different paths:

  • It depends on the __VERSION__, which, I think, should be 300 in both of your cases, but that remains to be confirmed
  • It depends on the presence of some GL_EXT_shader_texture_lod extension (which can be checked with https://webglreport.com/ - it is listed as EXT_shader_texture_lod for me, but dropping the GL_ prefix seems to be a naming convention)
  • If neither is supported, then it’s missing a return value, meaning that the compilation should probably fail there… (?)
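The branching described above can be sketched in plain JS as follows. This is only an illustration of the dispatch logic: the real selection happens in GLSL at shader compile time, and the path names here just mirror the built-in/extension functions involved.

```javascript
// Sketch of which lookup czm_textureCube would use, based on the three
// cases listed above (illustrative; the real logic is preprocessor
// branching in GLSL, resolved at shader compile time).
function textureCubePath(glslVersion, hasShaderTextureLodExtension) {
  if (glslVersion === 300) {
    return "textureLod"; // WebGL2 / GLSL ES 3.00 built-in
  }
  if (hasShaderTextureLodExtension) {
    return "textureCubeLodEXT"; // GL_EXT_shader_texture_lod path
  }
  return undefined; // no explicit-LOD lookup available; compilation likely fails
}

console.log(textureCubePath(300, false)); // "textureLod"
```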

Thanks @Marco13.

__VERSION__ is indeed 300, so it is executing:

 return textureLod(sampler, p, lod);

But that seems to be returning the wrong value. (I’ve tested that by replacing it with code that just returns a constant, and then the model is lit correctly.) So I don’t think it’s a problem with czm_textureCube() itself. I also replaced the p input to that function with a constant, to rule out a problem with the coordinate calculation.

I’d guess it’s something to do with it being a cube map texture rather than a regular texture, but I don’t know much about GL.

Found this warning:

js: [.WebGL-0000015610F74950]RENDER WARNING: texture bound to texture unit 0 is not renderable. It might be non-power-of-2 or have incompatible texture filtering (maybe)?

Which explains why the texture access fails.

This is being caused in CubeMap.js setupSampler() by:

gl.texParameteri(target, gl.TEXTURE_MIN_FILTER, minificationFilter);

Where minificationFilter is LINEAR_MIPMAP_LINEAR.

If I change minificationFilter to LINEAR or NEAREST then it works.
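To make the workaround concrete, here is a rough sketch using a mock gl object. The numeric constants are the standard WebGL enum values; chooseMinFilter and the mock are stand-ins for illustration, not Cesium’s actual setupSampler() code.

```javascript
// Standard WebGL enum values (fixed by the GL specification).
const TEXTURE_MIN_FILTER = 0x2801;
const LINEAR = 0x2601; // 9729
const LINEAR_MIPMAP_LINEAR = 0x2703; // 9987
const TEXTURE_CUBE_MAP = 0x8513;

// Stand-in for the filter choice in CubeMap.js setupSampler().
// Passing useMipmaps = false is the workaround described above.
function chooseMinFilter(useMipmaps) {
  return useMipmaps ? LINEAR_MIPMAP_LINEAR : LINEAR;
}

// Mock gl that records the texParameteri call, so the effect is visible.
const calls = [];
const gl = {
  texParameteri: (target, pname, value) => calls.push([target, pname, value]),
};
gl.texParameteri(TEXTURE_CUBE_MAP, TEXTURE_MIN_FILTER, chooseMinFilter(false));
console.log(calls[0][2] === LINEAR); // true
```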

Thanks for the detailed investigations here @srce !

It sounds like this could be enough to not only open an issue, but also describe the proposed solution (i.e. what has to be done in a PR in order to solve this). And this PR might even be simple, if changing the minification filter does not have undesired side-effects.

Ping @jjhembd and @Gabby_Getz for awareness

OK. I’d say it’s possibly more of a workaround than a fix. From what I can see, the mipmaps are generated beforehand, so I’m not sure why it shouldn’t work.

I also tried it on a laptop with Intel graphics rather than NVIDIA, and that showed the same problem.

Hi @srce, I’m not familiar with QtWebEngine. But based on the error you are getting, it sounds like it is not actually using WebGL2.

It might be non-power-of-2 or have incompatible texture filtering

Non-power-of-2 textures are supported in WebGL2.

Can you verify if QtWebEngine actually supports WebGL2?

Hi @jjhembd,

QtWebEngine uses Chromium, so it does support WebGL2. WebGL Report confirms WebGL 2 is supported.

I tracked that specific error message down in the Chromium source, and the texture is a power of two. It seems to be the mipmapping that is the problem (hence the “incompatible texture filtering” part of the warning, and why disabling mipmapping as above makes it work).

Digging further into the Chromium source, the problem occurs in Texture::CanRenderWithSampler(): the code checks that all the generated mipmap levels have the same internal_format as the base level texture. This check fails because the base level has an internal_format of 6408 (gl.RGBA), but the mipmaps have an internal_format of 32856 (gl.RGBA8).
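A plain-JS sketch of the check as I understand it. The numeric values are the standard GL enums; mipChainConsistent is a simplification for illustration, not Chromium’s actual Texture::CanRenderWithSampler() code.

```javascript
// Standard GL enum values: 6408 is the unsized RGBA format,
// 32856 is the sized RGBA8 internal format.
const GL_RGBA = 0x1908; // 6408
const GL_RGBA8 = 0x8058; // 32856

// Simplified version of the check: every mip level must report the same
// internal format as the base level for the texture to be renderable
// with a mipmapping minification filter.
function mipChainConsistent(baseFormat, mipFormats) {
  return mipFormats.every((f) => f === baseFormat);
}

// The failing case from this thread: base level is unsized RGBA, but the
// generated mip levels report RGBA8.
console.log(mipChainConsistent(GL_RGBA, [GL_RGBA8, GL_RGBA8])); // false
```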

The internal format is set to gl.RGBA for the base texture in the Cesium source, in CubeMap.js:CubeMap(). When generating the mipmaps, there doesn’t appear to be a parameter to set the internal format; Chromium appears to call glGenerateMipmapEXT() to generate them.

If I change internalFormat in CubeMap.js to gl.RGBA8, then mipmapping works and the models are lit correctly.
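The change amounts to choosing the sized internal format up front. As a sketch (mapping only the one format relevant here; the numeric values are the standard enums, and the helper name is mine, not Cesium’s):

```javascript
const GL_RGBA = 0x1908; // unsized, decimal 6408
const GL_RGBA8 = 0x8058; // sized, decimal 32856

// Sketch of the fix: promote the unsized RGBA format to its sized WebGL2
// equivalent before creating the cube map, so the base level’s internal
// format matches the RGBA8 mip levels that mipmap generation produces.
function toSizedInternalFormat(format) {
  return format === GL_RGBA ? GL_RGBA8 : format;
}

console.log(toSizedInternalFormat(GL_RGBA) === GL_RGBA8); // true
```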

Where exactly the bug lies, I don’t know. Should Cesium use gl.RGBA8? It doesn’t seem to be required by the WebGL spec. Should Chromium treat the two formats as equivalent in Texture::CanRenderWithSampler()? Or is it a driver bug in glGenerateMipmapEXT()? Probably a question for the Chromium developers. Thanks.