Hey folks, I just wanted to give an update on the implementation of streaming terrain rendering in Cesium.
Scott Hunter and I have combined his imagery layers work and my streaming terrain work into a single effort, since there’s so much overlap between the two, and we’re making great progress on both. Take a look at the attached screen shot, which was generated today using the version of Cesium in the ‘imagery_layers’ branch. Not too bad, right? The imagery and terrain in the screen shot are both from ESRI and streamed from their public servers.
Ok, I’ll admit I carefully framed that screen shot to hide many of the remaining rendering problems, but it’s still coming along nicely.
With some of the foundations in place, we’re quickly getting to the point where we’d love to see more folks get involved in the development. In particular, I’d love to see someone implement a “TerrainProvider” to stream terrain from a WMS server like GeoServer instead of from ESRI. Any takers? With any luck, it shouldn’t be too difficult.
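To give a feel for what’s involved, here’s a very rough sketch of the shape such a provider might take. The names (WebMapServiceTerrainProvider, requestTileGeometry, GeographicTilingScheme, and so on) are illustrative guesses, not the actual interface in the imagery_layers branch:

    // Rough sketch only - the real TerrainProvider interface in the branch may
    // look different. The class and method names here are assumptions.
    function WebMapServiceTerrainProvider(description) {
        this.url = description.url;              // e.g. a GeoServer WMS endpoint
        this.layerName = description.layerName;  // elevation coverage published as a WMS layer
        this.tilingScheme = new GeographicTilingScheme();
    }

    WebMapServiceTerrainProvider.prototype.requestTileGeometry = function(tile) {
        var extent = this.tilingScheme.tileXYToExtent(tile.x, tile.y, tile.level);
        var bbox = [extent.west, extent.south, extent.east, extent.north].join(',');
        var url = this.url +
            '?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&STYLES=&FORMAT=image/png' +
            '&SRS=EPSG:4326&WIDTH=65&HEIGHT=65' +
            '&LAYERS=' + this.layerName +
            '&BBOX=' + bbox;

        // Fetch the elevation image, decode a grid of heights from it, and hand
        // those heights off to whatever builds the tile's vertex buffer.
        var image = new Image();
        image.onload = function() {
            createTileGeometryFromHeights(tile, image); // hypothetical helper
        };
        image.src = url;
    };

The interesting work is in decoding heights out of whatever the server returns and building tile geometry from them, which is where the existing ESRI provider would be the thing to crib from.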
Here are some (but probably not all) of the problems and things remaining to be done:
- The ESRI terrain is not georeferenced quite right. The tiling scheme is based on a web mercator projection, but heights within each tile are incorrectly mapped to vertices using a geographic projection. (There’s a short sketch of the missing mapping after this list.)
- The system is architected to allow imagery to be tiled differently from terrain, and to allow different imagery sources to use different tiling schemes. However, the code to support that is not done yet, so things will currently go very badly if terrain and imagery tiles are not aligned.
- Tile culling is overzealous in some cases (see the black spot in the lower-right of the screen shot) and not aggressive enough in others (too many tiles over the horizon).
- The frame rate is not what it should be. I suspect it’s a combination of less-than-ideal culling (see above), rendering more terrain detail than needed (the tile selection algorithm needs tweaking), and an inefficient fragment shader.
- Cracks between different levels of detail are visible in some cases. We’re still using the screen-space, Gaussian-blur-based crack-filling technique that was used pre-terrain, but sometimes the cracks are too big. We may be able to use the same technique with a bigger kernel, or we may end up dropping skirts down from the tile edges.
- We’re using a LOT of GPU memory. There are probably multiple factors here.
- The ESRI elevation data is accessed using a token that expires every day. I’ve been refreshing it daily, but we need a better solution.
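For the first item, by the way, the missing piece is essentially the inverse web mercator mapping when laying out the rows of heights within a tile. Roughly something like this, with southLatitude, northLatitude, and rows as stand-ins for the tile’s real edges and resolution:

    // Rows of heights should be evenly spaced in web mercator Y, with each
    // row's latitude coming from the inverse mercator projection, rather than
    // being evenly spaced in latitude. Variable names are illustrative.
    var southLatitude = 0.0;   // tile edges in radians (example values)
    var northLatitude = 0.6;
    var rows = 65;             // height samples per tile edge

    function mercatorYToLatitude(y) {
        // inverse of y = ln(tan(pi/4 + lat/2))
        return 2.0 * Math.atan(Math.exp(y)) - Math.PI / 2.0;
    }

    var southY = Math.log(Math.tan(Math.PI / 4.0 + southLatitude / 2.0));
    var northY = Math.log(Math.tan(Math.PI / 4.0 + northLatitude / 2.0));

    for (var row = 0; row < rows; ++row) {
        var y = southY + (northY - southY) * (row / (rows - 1));
        var latitude = mercatorYToLatitude(y);  // use this, not a linear latitude step
        // ...position this row's vertices at 'latitude'...
    }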
Let me know if you try it out in the imagery_layers branch, and if you’d like to get involved.
Thanks!
Kevin
I took this for a spin, and it looks really good so far. I’m probably just telling you things you already know, but:
- Draw calls
  - For the default view, the number of draw calls is reasonable, so culling and LOD are working here. In WebGL Inspector, I noticed that a ton of textures were loaded. Are these prefetched? If so, what is the prefetching algorithm?
  - For a zoomed-way-in, straight-down view, the number of draw calls is also reasonable.
  - As you mentioned, for a horizon view, I think too many tiles are rendered - both tiles that should be culled and tiles that should not have been selected by LOD refinement. Also, horizon views are screaming for anisotropic filtering. I already implemented the extension in Cesium, and we used it pre-terrain (and now, I assume). It is in Canary, but not Chrome stable.
- Memory
  - What is the replacement policy? With a straight-down view, if I zoom in close, zoom out, and then zoom back in, the tiles weren’t cached. Did I thrash?
  - My GPU process was 500-850 MB most of the time, and sometimes spiked to 1.1 GB.
  - In Chrome stable and Canary, I locked up the tab a few times. Most of the time WebGL Inspector was also open.
- What is the cause of the current latency? Slow servers? Local processing - is anything expensive not on a web worker? Our proxy? A combination?
- Sometimes the cracks are huge - on the order of 100s of pixels. Our blur technique is cool, but it’s not going to fill those. Perhaps when latency is minimized, the cracks will be much smaller.
Patrick
- For the default view, the number of draw calls is reasonable, so culling and LOD are working here. In WebGL Inspector, I noticed that a ton of textures were loaded. Are these prefetched? If so, what is the prefetching algorithm?
There’s currently no prefetching. I’m actually surprised to hear you say the number of draw calls is reasonable for the default view.
- For a zoomed-way-in-straight-down view, the number of draw calls is also reasonable.
- As you mentioned, for a horizon view, I think too many tiles are rendered - both tiles that should be culled and tiles that should not have been selected by LOD refinement. Also, horizon views are screaming for anisotropic filtering. I already implemented the extension in Cesium, and we used it pre-terrain (and now, I assume). It is in Canary, but not Chrome stable.
If it was on before I think it should still be on. Did you observe it to be not working? I haven’t tried running in Canary.
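For what it’s worth, a quick way to sanity-check whether the extension is actually being applied, outside of Cesium’s renderer wrapper, is something like this (gl here is just the raw WebGL context and texture an already-created tile texture):

    // Raw WebGL check of the anisotropic filtering extension, independent of
    // how Cesium wraps it.
    var ext = gl.getExtension('EXT_texture_filter_anisotropic') ||
              gl.getExtension('WEBKIT_EXT_texture_filter_anisotropic');
    if (ext) {
        var maxAnisotropy = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, maxAnisotropy);
    } else {
        console.log('anisotropic filtering extension not available');
    }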
- Memory
- What is the replacement policy? With a straight-down view if I zoom in close, zoom out, and then zoom back in, the tiles weren’t cached. Did I thrash?
You did, even though you shouldn’t have. The tile cache is currently limited to 100 tiles or however many were visited (not necessarily rendered) in the current frame, whichever is greater. For zoomed-in, horizon views, more than 100 tiles are pretty much always visited. There are several issues here that we need to think through - I’m not really prepared to try to describe them right now.
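Roughly, though, the policy amounts to something like this (a simplified sketch, not the actual code in the branch):

    // Simplified sketch of the replacement policy described above. 'cache' is
    // assumed to be an array ordered from most to least recently visited, and
    // freeResources() is a stand-in for whatever releases a tile's GPU memory.
    function trimTileCache(cache, tilesVisitedThisFrame) {
        var maximumTiles = Math.max(100, tilesVisitedThisFrame);
        while (cache.length > maximumTiles) {
            cache.pop().freeResources();
        }
    }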
- My GPU process was 500-850 MB most of the time, and sometimes spiked to 1.1 GB.
- In Chrome stable and Canary, I locked up the tab a few times. Most of the time WebGL Inspector was also open.
Yep, memory usage is huge. It sounds like you’re seeing a bit more than I am, but not by much. I’m not sure about the locking up - I haven’t seen that. Can you reproduce it without WebGL Inspector open?
- What is the cause of the current latency? Slow servers? Local processing - is anything expensive not on a web worker? Our proxy? A combination?
Yes, all of the above. I think the bulk of it is ESRI’s terrain server, because it’s not exactly designed for this use case. The elevation data is not tiled and cached; instead, each request invokes server-side processing. There’s definitely some latency introduced by our TIFF->PNG proxy as well. I suspect the client-side latency is small, but I haven’t measured it. Remember, though, that web workers don’t reduce latency - they actually increase it a bit. However (and very importantly) they let us keep rendering while doing time-consuming work.
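As a rough picture of how that hand-off looks (simplified, and with made-up names; the actual code in the branch structures this differently):

    // Main thread: post the raw heights to a worker and keep rendering. The
    // worker does the expensive part and posts the result back.
    var worker = new Worker('createVerticesFromHeights.js'); // hypothetical file
    worker.onmessage = function(event) {
        tile.vertices = event.data.vertices; // ready to be uploaded to the GPU
    };
    worker.postMessage({ heights: heightBuffer, extent: tileExtent });

    // createVerticesFromHeights.js - the worker
    onmessage = function(event) {
        var vertices = computeVertices(event.data.heights, event.data.extent);
        postMessage({ vertices: vertices });
    };

The postMessage round trip is where the small extra latency comes from, but the main thread never blocks on the expensive computeVertices step.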
- Sometimes the cracks are huge - on the order of 100s of pixels. Our blur technique is cool, but it’s not going to fill those. Perhaps when latency is minimized, the cracks will be much smaller.
Oops, I just realized I forgot to push my last round of changes that made the terrain tiles actually line up correctly. The cracks are much, much smaller now, so give it another try - the experience should be vastly better.
In any case, I agree we won’t be able to guarantee screen-space filling of the cracks across arbitrary LODs, at least not without actual geometric error data. Maybe “usually” filling the cracks is good enough. Or maybe we need skirts. Probably the latter.
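If we do go with skirts, the basic idea is just to duplicate each edge vertex pushed down below the tile’s minimum height, so neighboring tiles at different LODs overlap vertically instead of leaving a gap. A sketch (names illustrative, not the branch’s actual data structures):

    // For each vertex along a tile edge, add a copy dropped below the tile's
    // minimum height; skirt triangles then connect each edge vertex to its
    // lowered copy, hiding any crack behind a vertical wall.
    function addSkirt(edgeVertices, minimumHeight, skirtLength, skirtVertices) {
        for (var i = 0; i < edgeVertices.length; ++i) {
            var v = edgeVertices[i];
            skirtVertices.push({
                longitude : v.longitude,
                latitude : v.latitude,
                height : minimumHeight - skirtLength
            });
        }
    }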
Thanks for the comments!
Kevin
Comments:
There’s currently no prefetching. I’m actually surprised to hear you say the number of draw calls is reasonable for the default view.
On closer analysis, you’re right. For a 1920x1019 framebuffer, where, say, the globe takes up half the width, we are rendering 25-30 tiles.
If it was on before I think it should still be on. Did you observe it to be not working? I haven’t tried running in Canary.
I believe it’s still on.
There are several issues here that we need to think through - I’m not really prepared to try to describe them right now.
I noticed that the cache empties when I go under the globe. I also got into a situation that I can’t reproduce, where two of four children were rendered and the others were black.
Not sure about the locking up, I haven’t seen that. Can you reproduce it without WebGL Inspector open?
Not in Canary, at least. I didn’t try stable.
The cracks are much, much smaller now, so give it another try - the experience should be vastly better.
Looks good.
Patrick
I really like this. Can you send me the step-by-step process? Otherwise, tell me how to make a layer.json file.