Best Practice for Dynamically Loading 3D Tiles Based on Camera Distance?

Hey Cesium team and community,

I’m currently working on a city-scale visualization where loading all 3D Tiles at once impacts performance. Is there a best practice or example for dynamically loading/unloading 3D Tilesets based on the camera’s distance or zoom level?

Ideally, I want faraway tiles to unload or not render until the user zooms closer. I’ve tried tweaking maximumScreenSpaceError and fogDensity, but I’m wondering if there’s a better approach for controlling dynamic loading.
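
For context, here’s roughly what I’ve tried so far (CesiumJS; `viewer` and `tilesetUrl` stand in for my actual setup):

```js
// Load the tileset (Cesium3DTileset.fromUrl is the current CesiumJS API;
// older versions use `new Cesium.Cesium3DTileset({ url })` instead).
const tileset = await Cesium.Cesium3DTileset.fromUrl(tilesetUrl);
viewer.scene.primitives.add(tileset);

// The two knobs I've been tweaking:
tileset.maximumScreenSpaceError = 24; // higher = coarser tiles, less loading
viewer.scene.fog.density = 6.0e-4;    // thicker fog culls distant tiles sooner
```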

Would appreciate any code snippets or links to working examples!

Cheers,
Jhonn Mick


Depending on what the client is (CesiumJS, Cesium for Unreal, Cesium for Unity…), this question could probably be moved to the respective subsection for further, more specific optimization hints.

But from the perspective of 3D Tiles itself (and on the level on which it applies to common clients), the mechanism for balancing the amount of rendered data is already built in: each tile in a 3D Tiles tileset has a ‘geometric error’ that governs refinement. The idea is precisely that low-detail tiles have a high geometric error and high-detail tiles have a low one. At runtime, the geometric error is converted into a screen-space error, and that screen-space error drives the refinement. So clients usually already use the high-detail tiles only when the user zooms in closely.
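
To make that concrete, here is a minimal sketch (the function and variable names are illustrative, not CesiumJS internals) of how a geometric error in meters becomes a screen-space error in pixels under a perspective projection:

```js
// Project a tile's geometric error (meters) onto the screen, given the
// camera distance, the viewport height in pixels, and the vertical FOV.
function screenSpaceError(geometricError, distanceMeters, screenHeightPx, fovyRadians) {
  const sseDenominator = 2.0 * Math.tan(fovyRadians / 2.0);
  return (geometricError * screenHeightPx) / (distanceMeters * sseDenominator);
}

// A tile is refined when its screen-space error exceeds the tileset's
// maximumScreenSpaceError (16 by default in CesiumJS). Far away, the error
// is small and the coarse tile suffices; zooming in shrinks the distance,
// the error grows, and the client requests the children.
const fovy = Math.PI / 3.0; // 60 degrees
console.log(screenSpaceError(64, 5000, 1080, fovy)); // ~12 -> keep coarse tile
console.log(screenSpaceError(64, 200, 1080, fovy));  // ~299 -> refine
```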

But of course, there are “overarching” factors: for example, the performance of the machine that the application runs on, possibly the number of tilesets that are displayed, and the structure of the tileset itself (i.e., whether its geometric errors offer the right trade-off).
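
When several tilesets are in the scene at once, one pragmatic pattern (a sketch, not an official API; `viewer`, `tilesets`, and the distance threshold are assumptions here) is to hide whole tilesets whose bounding volume is far from the camera, which comes close to the load/unload behavior you describe:

```js
const MAX_VISIBLE_DISTANCE = 20000.0; // meters; tune for your scene

viewer.scene.camera.changed.addEventListener(() => {
  const cameraPosition = viewer.scene.camera.positionWC;
  for (const tileset of tilesets) {
    const sphere = tileset.boundingSphere;
    const distance =
      Cesium.Cartesian3.distance(cameraPosition, sphere.center) - sphere.radius;
    // A hidden tileset is neither refined nor rendered; its cached tiles are
    // evicted as the cache fills, so distant data is effectively unloaded
    // without any manual bookkeeping.
    tileset.show = distance < MAX_VISIBLE_DISTANCE;
  }
});
```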

Usually, maximumScreenSpaceError is the first knob for tweaking the overall trade-off between quality and performance. With further information about the client (and maybe about the structure and contents of the tilesets), it may be possible to provide more specific hints.
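
As a starting point in CesiumJS (property names from the Cesium3DTileset documentation; verify them against your version), that tweaking looks like this:

```js
// Default is 16; raising it keeps coarser tiles on screen longer,
// so fewer tiles are loaded and rendered.
tileset.maximumScreenSpaceError = 24;

// Optionally relax the error further for tiles far from the camera,
// which biases detail toward what the user is actually looking at.
tileset.dynamicScreenSpaceError = true;

// Newer CesiumJS versions also expose a tile cache budget, e.g.
// tileset.cacheBytes, to bound how much unloaded-but-cached data is kept.
```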