How many ImageryLayers can we add to the imageryLayers of 3D Tiles?

In our project we need to add a bunch of ImageryLayers to the imageryLayers of 3D Tiles (Google Photorealistic 3D Tiles).
But it seems that there is a limit on the number of layers that can be added.
Any technical explanation would be appreciated.
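
Simplified, our setup looks roughly like the following (the actual imagery providers differ, and the URLs below are only placeholders for illustration; this assumes an async context and a recent CesiumJS version where Cesium3DTileset exposes an imageryLayers collection for draping):

```javascript
// Simplified sketch of our setup (actual providers/URLs differ).
const viewer = new Cesium.Viewer("cesiumContainer");

const tileset = await Cesium.createGooglePhotorealistic3DTileset();
viewer.scene.primitives.add(tileset);

// Add several imagery layers that should be draped on the tileset
const urls = [
  "https://example.com/layerA/{z}/{x}/{y}.png",
  "https://example.com/layerB/{z}/{x}/{y}.png",
  "https://example.com/layerC/{z}/{x}/{y}.png",
];
for (const url of urls) {
  const provider = new Cesium.UrlTemplateImageryProvider({ url });
  tileset.imageryLayers.add(new Cesium.ImageryLayer(provider));
}
```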

A detailed explanation may be complex. The actual limitation may depend on very specific hardware details. But unfortunately, the (short) answer for now is that everything is tailored for “roughly one” imagery layer to be draped on 3D Tiles.

At its core, this is only one bullet point in the follow-ups for the draping feature, namely the point that “Upsampling is not implemented”.


An attempt to give a very short summary of where this limitation is coming from, what “upsampling” means, and why there is no “upsampling” right now:

  • When draping imagery on 3D Tiles, the geometry has to be rendered with multiple textures being “active” at the same time, because their pixels have to be “mixed together” during rendering. Graphics cards have an upper limit on the number of textures that may be active at the same time. This limit is usually 16 (a quick way to query it is shown after this list).
  • Imagery data usually comes as a “pyramid” with multiple levels of detail: there is one imagery texture that covers ~“the whole earth”. When zooming in, this large texture is divided into 4 imagery textures that contain finer details. When zooming in further, there are 4 imagery textures for each of those … down to a level where one imagery texture only covers a small geographic extent.
  • During draping, the implementation computes which imagery textures are required for a given piece of geometry, based on the geographic extent of that geometry. It currently uses an imagery level at which it tries to render the geometry with “up to 9” imagery textures at once. That is the maximum that is generally possible: if it tried to use the next-finer level, it would need 9x4=36 textures, but the limit is usually 16 (see the back-of-the-envelope sketch after this list).
  • If more imagery layers should be used (or a higher level of detail of the imagery), then the geometry would have to be made smaller: it would be necessary to “cut out” a piece of the geometry, so that this small piece could be rendered with a correspondingly lower number of imagery textures.
  • This process of “cutting geometry into smaller pieces” is what is referred to as “upsampling”. And “cutting geometry into smaller pieces” is (trivial in some ways, but) not possible in CesiumJS right now: all the geometry data is uploaded to GPU memory while the data is being loaded(!), and there is no reasonable way to get this data back into CPU memory to do any form of (post)processing/subdivision on it.
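
As a side note to the first bullet point: the per-draw-call texture limit can be queried directly from WebGL. This is just a generic WebGL snippet, not CesiumJS-specific code:

```javascript
// Query how many textures a fragment shader may sample in a single draw
// call. WebGL 2 guarantees at least 16, and 16 is also what most GPUs
// report in practice.
const gl = document.createElement("canvas").getContext("webgl2");
const maxTextures = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
console.log(`Texture units available per draw call: ${maxTextures}`);
```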
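
And to make the “up to 9” vs. “9x4=36” numbers concrete, here is a back-of-the-envelope sketch. This is only illustrative arithmetic, not the actual level-selection code in CesiumJS (which is considerably more involved). The assumption that the 9 textures correspond to a worst-case 3x3 block of imagery tiles, and that each additional layer multiplies the texture count, is a simplification:

```javascript
// Back-of-the-envelope arithmetic for the texture counts, NOT the real
// CesiumJS level-selection logic.
//
// Assumption: a piece of geometry overlaps at most a 3x3 block of imagery
// tiles at the selected imagery level (9 textures for one layer). Each
// finer imagery level splits every tile into 2x2, i.e. multiplies the
// tile count by 4, and each additional imagery layer needs its own set
// of textures.
const textureLimit = 16;

function texturesNeeded(tilesAtSelectedLevel, extraLevels, layerCount) {
  return tilesAtSelectedLevel * Math.pow(4, extraLevels) * layerCount;
}

const fits = (n) => n <= textureLimit;
console.log(texturesNeeded(9, 0, 1), fits(texturesNeeded(9, 0, 1))); // 9  true
console.log(texturesNeeded(9, 1, 1), fits(texturesNeeded(9, 1, 1))); // 36 false
console.log(texturesNeeded(9, 0, 2), fits(texturesNeeded(9, 0, 2))); // 18 false
```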

Again: The details are pretty complicated (far more than the given summary suggests). There are several comments in the pull request about that, documenting the time-consuming process of going back and forth about how to deal with this limitation (example, another example - may be hard to digest without the full context). The lack of upsampling is, in turn, only one bullet point in the list of justifications for opening another issue. Efforts here will have to start at a pretty low level.

In theory, one could imagine an approach where up to 4 imagery layers could be used even without upsampling - namely, by using a different (coarser) level of detail for the imagery. Even though the imagery would then be more coarse and pixelated, it could still be reasonable for certain use cases. A very quick experiment would be to replace the hard-wired 1 in this line with a far lower number - maybe something like 0.05 or so. The goal would be to use a level of detail for the imagery where each piece of geometry only needs up to 4 imagery textures per layer (in the worst case, when the geometry is at a “corner” where these textures meet), so that it could be possible to drape up to 4 imagery layers on the geometry (see the rough arithmetic below). But this would only be a somewhat shallow workaround, and it has not been investigated in detail.
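
To spell out the arithmetic behind that idea (again purely illustrative, assuming the usual limit of 16 texture units and the worst case described above):

```javascript
// Illustrative arithmetic for the workaround idea, not actual CesiumJS code.
// If the imagery level is chosen coarsely enough that a piece of geometry
// overlaps at most a 2x2 block of imagery tiles (4 textures per layer in
// the worst case, at a tile corner), then:
const textureLimit = 16;
const texturesPerLayerWorstCase = 4; // 2x2 block at the coarser level
const maxDrapedLayers = Math.floor(textureLimit / texturesPerLayerWorstCase);
console.log(maxDrapedLayers); // 4
```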