General problems developing primitives/shaders on iOS

This isn't a Cesium problem per se, but this seems to be the best place to ask, so please bear with me.

I've been writing a variety of primitives for the Live Earth web app (which uses Cesium) for things like animated traffic, weather, heatmaps, billboards, etc. Generally this has been a pretty smooth process until the point where I test on an iPad Pro. Almost every shader has taken longer to get working on the iPad than it took to develop in the first place, and some still don't work on the iPad for reasons that are mysterious to me.

Generally the problem manifests as nothing being rendered on the screen, whereas on a Windows desktop in Chrome the primitives render just fine.

Here are some of the issues I've encountered so far...

1. Using multiple VertexBuffers in a single VertexArray (one for static, one for dynamic, and one for instance) causes one of the static or dynamic vertex buffers to be ignored and the other to be written to the wrong vertex offsets/attributes.

2. Having an unused vertex attribute in a shader causes that attribute to be optimized away, and the buffers are then written to the wrong offsets/attributes (this typically only happens during shader development).

3. Adding too many uniforms (about 5 more than the automatic Cesium uniforms; I don't remember the exact count) causes the shader to not render, even though the limit webglreport says is supported is much higher (128 vertex uniform vectors are supposedly supported, but only about 16 are usable in practice, even accounting for mat4s and arrays taking extra slots).

4. ?!?!? Still puzzling out some baffling issues.
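For issue 2, the cheapest guard I've found is to check each attribute's location right after linking, before wiring up any buffers, so an optimized-away attribute is caught immediately instead of silently shifting the others. A minimal sketch (the helper name is mine; the location map would come from calling `gl.getAttribLocation` once per attribute name):

```javascript
// Hypothetical helper: given a map of attribute name -> location (as returned
// by gl.getAttribLocation for each name), report the attributes the compiler
// optimized away (location === -1) so buffers aren't bound to shifted slots.
function findMissingAttributes(locations) {
  return Object.keys(locations).filter((name) => locations[name] === -1);
}

// Example: aDebugColor was unused in the shader, so the driver dropped it.
const missing = findMissingAttributes({
  aPosition: 0,
  aTexCoord: 1,
  aDebugColor: -1,
});
console.log(missing); // → ["aDebugColor"]
```

Failing fast here at least turns the silent iOS reshuffling into a loud error during development.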

The built-in primitives in Cesium seem to work just fine, so I assume someone on your team has worked through a lot of these issues and may be able to give some tips.

Finally, my questions: What other issues have you encountered writing the primitives that ship with Cesium on iOS? What tips do you have that would help narrow down issues faster? Any resources for further reading? I have read http://codeflow.org/entries/2013/feb/22/how-to-write-portable-webgl/ and http://codeflow.org/entries/2014/jun/08/some-issues-with-apples-ios-webgl-implementation/ by the same author.

Shot in the dark, and also not an answer to your question about iOS debugging tips, but is it possible to use Cesium’s existing API for doing lots of time-dynamic things rather than write new Primitives? You mention “multiple VertexBuffers in a single VertexArray (one for static, one for dynamic, and one for instance),” but combining this info with what I’ve seen in the Live Earth demo video, could Cesium’s Billboard Instance Collections be a good fit for this instead of a custom Primitive? It’s in our Development Sandcastle, and works fine on the iPad Pro we have here in the office. Cesium’s ready-made instancing features also have per-instance picking built in too, if that’s what the instance VertexBuffer was for.

But if that’s not the case and BillboardInstanceCollection doesn’t meet your needs, I also wonder if it would help to hint the dynamic VertexBuffer as STREAM_DRAW, if it was DYNAMIC_DRAW before… I think this is what Cesium does for dynamic BillboardInstanceCollections and PointPrimitiveCollections, which again seem to work fine on the hardware we have on hand. On desktop it seems like it also works to just use STATIC_DRAW for everything. Perhaps iOS is much more strict about what will work, and this does seem to be the right case for STREAM_DRAW on the OpenGL wiki’s documentation on buffer use types: https://www.khronos.org/opengl/wiki/Buffer_Object#Buffer_Object_Usage
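To make the wiki's taxonomy concrete, here's a purely illustrative decision helper; the function name and thresholds are mine, not anything in Cesium:

```javascript
// Rough decision helper for buffer usage hints, following the OpenGL wiki's
// taxonomy: STATIC = write once / draw many, DYNAMIC = rewrite occasionally
// and reuse across several frames, STREAM = rewrite roughly every draw.
// (Names and thresholds here are illustrative, not a Cesium API.)
function pickUsageHint(writesPerSecond, drawsPerWrite) {
  if (writesPerSecond === 0) return "STATIC_DRAW";
  if (drawsPerWrite <= 1) return "STREAM_DRAW"; // updated before nearly every draw
  return "DYNAMIC_DRAW"; // updated, but reused for several draws
}

console.log(pickUsageHint(0, 1000)); // static geometry → "STATIC_DRAW"
console.log(pickUsageHint(60, 1));   // per-frame traffic data → "STREAM_DRAW"
console.log(pickUsageHint(1, 60));   // occasional updates → "DYNAMIC_DRAW"
```

Desktop drivers mostly treat these as soft hints, which would explain why STATIC_DRAW-for-everything works there; iOS may take the hint more literally.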

Less sure about what’s going on with the uniforms error… but maybe Cesium is adding more uniforms than you expect? There’s a bunch of additional shader modification that may be happening under the hood. I still wouldn’t expect that to bump up against the 128 limit, though.

More specifically to your questions about whether we've encountered problems or have debugging tips: we've also had problems with WebGL on iOS, although for me personally it's mostly been iOS's depth buffer precision on newer devices. If you want, you can read about the melodrama here. As an aside, it also turns out the precision of what gets written into the depth buffer isn't nailed down in the spec, so technically the iOS implementation wasn't “buggy,” just… not what we expected, especially since it was higher on older devices.

But anyway. Narrowing down the issue mostly involved coming up with a minimal code example to reproduce my hunch, which led to better questions that I could search for. This seems like it could help with the uniforms problem: you could at least confirm or rule out that the WebGL implementation is reporting unreliable vertex uniform counts.
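One way to build that minimal repro is to generate throwaway shaders with a known number of uniforms that can't be optimized away, then binary-search for the count where compilation or rendering actually fails on the device. A sketch of the generator half (the function name is mine; the compile-and-check loop would run in a minimal WebGL page):

```javascript
// Generate a throwaway vertex shader that declares `count` float uniforms and
// uses every one of them (so none can be optimized away). Compiling this for
// increasing `count` in a minimal WebGL page lets you binary-search the
// uniform count at which things actually break on a given device.
function makeUniformTestShader(count) {
  let decls = "";
  let sum = "0.0";
  for (let i = 0; i < count; i++) {
    decls += "uniform float u" + i + ";\n";
    sum += " + u" + i;
  }
  return (
    "attribute vec4 aPosition;\n" +
    decls +
    "void main() {\n" +
    "  gl_Position = aPosition + vec4(" + sum + ");\n" +
    "}\n"
  );
}
```

Comparing where this breaks against `gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS)` would settle whether the reported limit is honest.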

Let me know if any of that helps!

Sorry for the belated reply, I got some priorities shuffled but this will come back soon.

Replying to the suggestion to use Cesium’s billboard collection: There isn’t feature parity between that and our implementation. For example, we have billboards that can rotate to heading. Additionally, we are seeing about a 2x performance improvement with our implementation. We did work out all the kinks with drill picking, etc.

Replying to the buffer usage hints: I will try that and let you know if that fixes anything, thanks.

Replying to uniform count: I’ve checked the actual uniform counts by both inspecting Cesium’s compiled shaders, and the ShaderProgram manual/automatic uniforms. It seems to break when going from 8 -> 9 uniforms. We might be helped here if Cesium packed some of these. For example, czm_log2NearDistance and czm_log2FarDistance could be a single vec2 to get back 1 more precious uniform. At the moment I’m considering re-compiling the shader on MacOS when one of my uniforms changes… which is horrific.
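For anyone auditing the same thing, I've been tallying slots by hand with a rough helper like this (the names and packing rules are my approximation of the GLSL ES rules, not what any particular driver actually does):

```javascript
// Back-of-envelope tally of vertex uniform vec4 slots, to compare against
// gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS). Rough GLSL ES rules:
// scalars/vec2/vec3/vec4 take 1 row, mat3 takes 3, mat4 takes 4, and each
// array element takes at least a full row. Real driver packing may differ,
// which is rather the point of checking by hand.
const ROWS = { float: 1, vec2: 1, vec3: 1, vec4: 1, mat2: 2, mat3: 3, mat4: 4 };

function countUniformRows(uniforms) {
  // uniforms: array of { type, arrayLength }, arrayLength defaulting to 1
  return uniforms.reduce(
    (total, u) => total + ROWS[u.type] * (u.arrayLength || 1),
    0
  );
}

console.log(
  countUniformRows([
    { type: "mat4" },                 // e.g. a model-view-projection matrix
    { type: "float" },                // czm_log2NearDistance
    { type: "float" },                // czm_log2FarDistance
    { type: "vec4", arrayLength: 4 }, // a small uniform array
  ])
); // → 10
```

A tally like this makes the czm_log2NearDistance/czm_log2FarDistance pair stand out as an obvious vec2 candidate.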

Is there a way for us to opt in to more control over shader compilation, or to disable Cesium’s automatic changes to the shaders? For example, I’d like to try things like manually packing the aforementioned near/far uniforms and see if that fixes our issue. Beyond debugging, there are some cases where we want to work around Cesium magic. For example, in one shader we use the depth buffer to perform a stable sort (on a separate depth texture from the globe). The introduction of logarithmic depth broke this because we want uniformly distributed depth values to avoid Z-fighting. So it would be great to disable logarithmic depth for just this one shader. We had to work around the automatic pick once too, so generally speaking I’d like more control here.

No great conclusion yet, just wanted to acknowledge your kind e-mail.