How do tiling and zoom work with WebGL and a sphere?

I'm very curious and just getting started in WebGL.

Flat implementations outside of WebGL just add images to a div container and emulate zoom...

One thing I noticed while making my own WebGL sphere and camera is that once I start zooming in on the sphere, the front disappears pretty early. Tiling also seems like a tough challenge, with no documentation I can find on the subject. It must be something that's better documented on the game development side.

The codebase is also very large at this point, so I figured I could get some guidance on where to look and what techniques are used.

Thanks,
Craig.

is that once I start zooming in on the sphere, the front disappears pretty early.

That sounds like an issue with the near plane not being close enough to the camera. But if you're dealing with very large geometry or vast ranges, like global scale, a multi-frustum or logarithmic depth buffer setup would be needed to resolve these issues. We had a blog post on that:
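
To make the near-plane part concrete, here's a minimal sketch, assuming a gl-matrix projection matrix and a `cameraHeight` value you track yourself, that scales the near and far planes with the camera's height above the globe. The 1% fraction and the 0.1 m clamp are arbitrary choices, not values from CesiumJS:

```ts
import { mat4 } from "gl-matrix";

// Hypothetical helper: recompute the projection each frame so the near plane
// moves in as the camera approaches the surface. A fixed, too-distant near
// plane is what clips the front of the sphere away when you zoom in.
function updateProjection(
  projection: mat4,
  fovY: number,         // vertical field of view, in radians
  aspect: number,       // viewport width / height
  cameraHeight: number  // assumed: camera height above the sphere surface, meters
): void {
  const EARTH_RADIUS = 6378137.0; // WGS84 equatorial radius, meters

  // Assumption: near plane at 1% of camera height, clamped so it never
  // collapses to zero (a zero near plane breaks the perspective divide).
  const near = Math.max(0.01 * cameraHeight, 0.1);

  // Far plane comfortably past the far side of the globe.
  const far = cameraHeight + 2.0 * EARTH_RADIUS;

  mat4.perspective(projection, fovY, aspect, near, far);
}
```

The catch with simply pulling the near plane in is that a huge far/near ratio eats depth-buffer precision, which is exactly the problem the multi-frustum and logarithmic depth approaches in that post address.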

For tiling, what kind of tiling do you mean? Like making your own system to render imagery tiles, or 3D Tiles, or taking source data and converting it to tiles? In any case, you might be interested in the series of blog posts tagged “WebGL” if you’re trying to understand the Cesium codebase:

https://cesium.com/blog/categories/webgl/

Best of luck! As you do this deep dive, if you notice any bugs or improvements you can make, making a contribution would be awesome!

Hey Omar! Thanks for the response! Both of those links look really interesting and I'll start reading right away.

I guess what I'm asking is: how do you render the imagery tiles at different zooms?

Like when the camera gets closer to the sphere, you redraw the new zoom tiles... is there a common practice for this? I'd assume video games do something similar, but it's different for mapping, since you just generate new tiles, while games increase the complexity of the geometry... at least I think that's how it goes.
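
For concreteness, here's roughly what I'm picturing: pick a tile zoom level from the camera's height, with each level doubling the tile count per axis. The formula and constants are just my guesses, not anything from CesiumJS:

```ts
// My own guess at the common practice (not CesiumJS code): choose a tile
// zoom level so the ground resolution roughly matches screen resolution,
// then fetch whichever tiles at that level cover the view.
const EARTH_CIRCUMFERENCE = 40075017; // meters at the equator
const TILE_SIZE = 256;                // pixels per imagery tile (typical)

function zoomLevelForHeight(
  cameraHeight: number,     // meters above the surface (assumed)
  viewportHeightPx: number, // canvas height in pixels
  fovY: number              // vertical field of view, in radians
): number {
  // Meters of ground visible vertically when looking straight down.
  const visibleMeters = 2 * cameraHeight * Math.tan(fovY / 2);
  const metersPerPixel = visibleMeters / viewportHeightPx;

  // At zoom z a tile spans EARTH_CIRCUMFERENCE / 2^z meters, so
  // metersPerPixel = EARTH_CIRCUMFERENCE / (TILE_SIZE * 2^z); solve for z.
  const z = Math.log2(EARTH_CIRCUMFERENCE / (TILE_SIZE * metersPerPixel));
  return Math.max(0, Math.round(z));
}
```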

Correct me if I'm wrong, but the sphere is drawn with triangles (as with any shape in WebGL), and the density of the triangles is what helps it appear spherical. Then when you take a tile in view, do you break it into pieces and fit all those pieces into the triangles? Or do you lay it on top and apply a curve to the image?

I think what you're asking is the general question of how you map a texture to geometry. Two triangles make a quadrilateral, and a sphere can be made from a grid of these quads. In the vertex shader, the texture coordinates can be set up so that a given image renders across the quad being drawn.
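
As a rough sketch of that idea (a toy example, not how CesiumJS actually structures its globe), here's one rectangular patch of a sphere built as a grid of quads, with per-vertex texture coordinates that stretch a single tile image across the patch; the longitude/latitude bounds are assumed to match the tile's footprint:

```ts
// Build one rectangular patch of a sphere as a (rows x cols) grid of quads.
// Each vertex gets a position on the unit sphere and a (u, v) texture
// coordinate, so one tile image spans the whole patch.
function buildSpherePatch(
  lonMin: number, lonMax: number, // radians
  latMin: number, latMax: number, // radians
  rows: number, cols: number
): { positions: Float32Array; uvs: Float32Array; indices: Uint16Array } {
  const positions: number[] = [];
  const uvs: number[] = [];
  const indices: number[] = [];

  for (let r = 0; r <= rows; r++) {
    for (let c = 0; c <= cols; c++) {
      const u = c / cols;
      const v = r / rows;
      const lon = lonMin + u * (lonMax - lonMin);
      const lat = latMin + v * (latMax - latMin);
      // Unit-sphere position; the denser the grid, the rounder it looks.
      positions.push(
        Math.cos(lat) * Math.cos(lon),
        Math.sin(lat),
        Math.cos(lat) * Math.sin(lon)
      );
      uvs.push(u, v);
    }
  }

  // Two triangles per grid cell.
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const i = r * (cols + 1) + c;
      indices.push(i, i + 1, i + cols + 1, i + 1, i + cols + 2, i + cols + 1);
    }
  }

  return {
    positions: new Float32Array(positions),
    uvs: new Float32Array(uvs),
    indices: new Uint16Array(indices),
  };
}

// Matching shaders: the vertex shader passes the texture coordinate through,
// and the fragment shader samples the tile image with it.
const vertexShaderSource = `
  attribute vec3 aPosition;
  attribute vec2 aTexCoord;
  uniform mat4 uModelViewProjection;
  varying vec2 vTexCoord;
  void main() {
    vTexCoord = aTexCoord;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D uTileTexture;
  varying vec2 vTexCoord;
  void main() {
    gl_FragColor = texture2D(uTileTexture, vTexCoord);
  }
`;
```

So the image isn't cut into pieces to fit individual triangles: the interpolated texture coordinates do the "curving" for you as the GPU rasterizes each triangle.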

I think when CesiumJS zooms in and loads new tiles, it also loads new geometry, so I don't think there's anything special it does there that a video game engine wouldn't do.
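
If it helps to see the shape of that, here's a hedged sketch of the usual quadtree refinement loop, a generic level-of-detail pattern rather than CesiumJS's actual implementation; the `Tile` interface and the 2-pixel threshold are assumptions for illustration:

```ts
// Generic quadtree LOD sketch (not CesiumJS's actual code): refine a tile
// into its four children whenever its error, projected onto the screen,
// exceeds a pixel threshold. Geometry and imagery get replaced together.
interface Tile {
  level: number;
  x: number;
  y: number;
  geometricError: number;    // meters of simplification error in this tile
  centerDistance(cameraPos: [number, number, number]): number;
  children(): Tile[];        // the four subtiles at level + 1
}

function selectTiles(
  root: Tile,
  cameraPos: [number, number, number],
  viewportHeightPx: number,
  fovY: number,               // vertical field of view, in radians
  maxScreenErrorPx = 2        // assumed threshold; tune to taste
): Tile[] {
  const visible: Tile[] = [];
  const stack: Tile[] = [root];

  while (stack.length > 0) {
    const tile = stack.pop()!;
    const distance = tile.centerDistance(cameraPos);

    // Project the tile's geometric error into pixels at its distance.
    const screenError =
      (tile.geometricError * viewportHeightPx) /
      (2 * distance * Math.tan(fovY / 2));

    if (screenError > maxScreenErrorPx) {
      stack.push(...tile.children()); // too coarse: draw the children instead
    } else {
      visible.push(tile); // good enough at this distance: draw this tile
    }
  }
  return visible;
}
```

In practice you'd also frustum-cull tiles and avoid refining into children whose data hasn't loaded yet, but the projected-error test is the core decision.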