Hello! I’ve received some questions about the differences in the number of tiles loaded between Cesium and the 3DTilesRendererJS project, and I wanted to clarify what the “right” behavior is. I’ve put together some comparison projects and confirmed that Cesium does, indeed, load fewer tiles for a user-provided tile set with an “ADD”-refinement root and 256 child tiles, structured like so:
root
├ refine=ADD
├ geometricError=500
└ 256x children
    └ geometricError=1
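For concreteness, a minimal tileset.json sketch of that layout might look like the following (only one of the 256 children is shown, the bounding volume values are placeholders, and the content file names are hypothetical):

```json
{
	"asset": { "version": "1.0" },
	"geometricError": 500,
	"root": {
		"refine": "ADD",
		"geometricError": 500,
		"boundingVolume": { "box": [ 0, 0, 0, 1000, 0, 0, 0, 1000, 0, 0, 0, 1000 ] },
		"content": { "uri": "root.b3dm" },
		"children": [
			{
				"geometricError": 1,
				"boundingVolume": { "box": [ -937.5, -937.5, 0, 62.5, 0, 0, 0, 62.5, 0, 0, 0, 1000 ] },
				"content": { "uri": "child_0_0.b3dm" }
			}
		]
	}
}
```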
The section on “refinement” in the 3D Tiles specification states the following:
If the tile has replacement refinement, the children tiles are rendered in place of the parent, that is, the parent tile is no longer rendered. If the tile has additive refinement, the children are rendered in addition to the parent tile.
And in the geometric error section:
If the introduced SSE exceeds the maximum allowed, then the tile is refined and its children are considered for rendering.
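For reference, the screen space error that implementations derive from geometric error is, roughly, the standard perspective projection of the error onto the screen. This is a sketch; the exact formula differs slightly between implementations:

```js
// Estimate the screen space error (in pixels) that a tile's geometric
// error projects to for a perspective camera. This mirrors the commonly
// used formula; exact details vary between implementations.
function screenSpaceError( geometricError, distanceToCamera, screenHeight, fovYRadians ) {

	// Height of the view frustum at unit distance from the camera.
	const frustumSlopeHeight = 2 * Math.tan( fovYRadians / 2 );

	// Project the world space error onto the screen.
	return ( geometricError * screenHeight ) / ( distanceToCamera * frustumSlopeHeight );

}

// With the tile set above, a 1080px-tall viewport, a 60 degree vertical
// FOV, and a camera 5000 units away:
const fovY = 60 * Math.PI / 180;
console.log( screenSpaceError( 500, 5000, 1080, fovY ) ); // root:  ~93.5px -> refine
console.log( screenSpaceError( 1, 5000, 1080, fovY ) );   // child: ~0.19px -> negligible
```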
My reading of these spec passages is that if an “ADD”-refinement tile is “refined”, then its children should render (barring frustum or visibility culling, etc.). However, when a child of an “ADD”-refinement tile is encountered, Cesium seems to use the parent tile’s geometricError value together with the child’s bounding volume to compute the child’s screen space error and decide whether the child is rendered. It’s not clear from the comments in the code why this is done, and it doesn’t follow the described purpose of geometric error in the specification.
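To make the two interpretations concrete, here is a rough sketch (illustrative only, not either project’s actual traversal code; it reuses the screenSpaceError helper from the snippet above):

```js
// Illustrative sketch of the two interpretations. This is NOT the actual
// Cesium or 3DTilesRendererJS traversal code. "errorTarget" is the
// maximum allowed screen space error in pixels.

// Interpretation A (my reading of the spec): refinement is decided once,
// at the parent, using the parent's own geometric error and bounding
// volume. If the parent refines, every visible child renders.
function childrenShouldRenderA( parent, parentDistance, screenHeight, fovY, errorTarget ) {

	return screenSpaceError( parent.geometricError, parentDistance, screenHeight, fovY ) > errorTarget;

}

// Interpretation B (the behavior I'm observing in Cesium): each child is
// re-tested individually, still using the parent's geometric error but
// with the distance to the CHILD's bounding volume, so distant children
// of a refined ADD tile can still be skipped.
function childShouldRenderB( parent, childDistance, screenHeight, fovY, errorTarget ) {

	return screenSpaceError( parent.geometricError, childDistance, screenHeight, fovY ) > errorTarget;

}
```

Under interpretation B, a child whose bounding volume sits much farther from the camera than the parent’s closest point can fail the test even though the parent itself was refined, which would explain the lower tile counts I’m seeing in Cesium.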
So my question is: what’s the rationale behind using the parent’s geometric error this way? How can this behavior be reconciled with the spec? And what’s the right thing to do? I’m hoping there’s a “canonical” correct rendering of a tile set so a loaded model can be rendered reliably across applications.
Thank you!