Here’s the PR that should have fixed the occlusion culling, in case you’re curious:
@Kevin_Ring thanks for the previous help; we resolved those issues with the correct toolchain.
We developed a new mechanism to load tiles at specific tiered distances, clamping the LODs for each distance tier. However, the entire city/county/state still doesn’t load as expected. For instance, from a distance, some buildings appear as pyramids instead of rectangular high-rises. Increasing the quality of distant objects to fix this significantly raises draw calls.
To address this, we are considering using instanced static meshes instead of rendering buildings from the tiles. This approach would place small boxes to create the effect of many buildings loaded in the distance without rendering each one individually. To achieve this, we need to identify if a tile contains geometry or buildings and the texture color so we can match the mesh color with the surrounding tiles.
Can you guide us on whether it’s possible to detect if a tile contains a building, allowing us to place instanced static meshes accordingly?
For instance, from a distance, some buildings appear as pyramids instead of rectangular high-rises. Increasing the quality of distant objects to fix this significantly raises draw calls.
Right. That’s pretty much why the LOD selection algorithm works the way it does. It chooses the lowest level of detail necessary to achieve a particular pixel error.
Can you guide us on whether it’s possible to detect if a tile contains a building, allowing us to place instanced static meshes accordingly?
I mean, there’s certainly nothing built-in to do that. Maybe some tilesets would have metadata that could tell you whether any given triangle is associated with a building, but Google Photorealistic 3D Tiles definitely does not. Beyond that, you’re in the land of heuristics or machine learning or something, and doing an accurate job of that would probably require having a reasonably detailed representation of the city to begin with (that is, not just the low detail tile that represents the buildings as pyramids).
@Kevin_Ring thanks. Responding to both points:
- One issue we’re encountering is with the current ScreenSpaceError (SSE). It provides more detail to the buildings close to the viewer, which significantly increases draw calls. To address this, we clamped the details for the buildings that are closer to the viewer.
- We understand that Google Maps doesn’t provide metadata, but we’re exploring if it’s possible to detect building shapes using the vertices within a tile. For example, if the vertex height in a tile is above the average height and the SSE of the tile is not valid, could we identify it as an ideal candidate for placing an InstancedStaticMesh with similar size and color? Meanwhile, we could cache the original tile data. When the tile’s SSE becomes valid (indicating it’s within a valid distance), we could replace the instanced static meshes with the cached tile data.
Additionally, since you’re suggesting pattern matching, could you provide more details on where exactly the geometries and textures for a tile are generated? How do you determine when to increase the LOD for a tile, and where in the process does this occur?
Does this approach seem feasible, and could you provide any guidance or suggestions for implementing it?
We’re noticing significant differences between the rendering results in Windows and Android, despite using the same parameters for the Cesium3DTileset object in both environments.
The image with more details is from Windows. As you can see, there’s an abundance of detail. When I froze the rendering and examined a tile up close, I noticed it doesn’t have any geometry, but it does have a higher quality texture. This suggests that different mipmaps are being used for the tile on different platforms.
On Android, it’s possible that the entire tile has a completely different geometric error, although this seems unlikely. I conducted some tests by adding the following code inside the _meetsSSE function:
if (tile.getGeometricError() < this->_options.MediumQuality)
    return true;
This code forces all tiles to remain at the same level of detail. I have other checks that only apply this within a defined range. However, the same code on Android produces different results, indicating that the texture mipmaps are changing.
I’m wondering if there’s a way to clamp the mipmaps of the texture. Can anyone provide some guidance on how we should test for mipmaps or what issue we might be facing here?
It provides more detail to the buildings close to the viewer, which significantly increases draw calls.
Yes, but that’s a feature not a bug. Or rather, it’s avoiding doing a similar number of draw calls for buildings that are not close to the viewer, while still providing that detail for nearby objects.
To address this, we clamped the details for the buildings that are closer to the viewer.
This means the detail up close will be lower. I’m surprised that’s acceptable for your application. Usually users want the exact opposite. They want detail up close, but they don’t want to draw things in the distance. My answer to that is that distant objects are already much lower quality, so eliminating them entirely won’t help nearly as much as they think it will.
But if you really want to eliminate detail up close, that will definitely help reduce rendering load by a lot! There’s nothing built in to do this (I don’t think anyone has ever asked for it), but it’s easy enough to do with a code change.
On this line, we stop refinement if the tile has no children:
You just need to add an extra check there:
if (isLeaf(tile) || tile.getGeometricError() < myGeometricErrorThreshold) {
  return _renderLeaf(frameState, tile, tilePriority, result);
}
You just need to pick a suitable value for myGeometricErrorThreshold that gives you the (lack of) detail you want. You can even change it every frame based on camera height if you like.
Additionally, since you’re suggesting pattern matching, could you provide more details on where exactly the geometries and textures for a tile are generated?
I’m not sure I completely understand the question. Geometries and textures are usually loaded, not generated. cesium-native loads models as in-memory glTF objects, which are then passed to the “renderer” (i.e., Unreal) to create appropriate engine resources for rendering. That latter process happens in the UnrealResourcePreparer class:
How do you determine when to increase the LOD for a tile, and where in the process does this occur?
The Tileset.cpp code I linked above is where cesium-native decides whether to continue traversal to more detailed children.
Does this approach seem feasible, and could you provide any guidance or suggestions for implementing it?
Honestly, I’m skeptical that detecting buildings and swapping in generated ones is a viable approach. But it’s your application, not mine; I could easily be wrong.
However, the same code in Android produces different results, indicating that texture mipmaps are changing.
Unreal has lots of settings to control texture quality. It seems likely one of these is reducing the texture quality on you.
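The per-platform texture knobs live in Unreal's device profile ini files, so a first place to look is your project's (or the engine's) device profiles for Android. The section name and values below are examples to experiment with, not recommendations:

```ini
; DefaultDeviceProfiles.ini -- example values only
[Android DeviceProfile]
DeviceType=Android
; Raise the streaming pool so high mips aren't dropped under memory pressure:
+CVars=r.Streaming.PoolSize=1024
; Remove any global mip bias:
+CVars=r.MipMapLODBias=0
; Per-group caps: MaxLODSize and LODBias limit the largest mip a texture
; group may use on this platform.
+TextureLODGroups=(Group=TEXTUREGROUP_World,MinLODSize=1,MaxLODSize=4096,LODBias=0,MinMagFilter=aniso,MipFilter=point)
```

Comparing the effective values of these settings between your Windows and Android builds (e.g., via the console on device) should tell you which one is reducing the texture quality.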