Implicit tiling - override TILE_GEOMETRIC_ERROR semantic?

Hi, for a minimal case (1 GLB cube) I’m trying to override the geometric error defined in tileset.json (value = 500) with a much smaller value defined in the 0.0.0.subtree file (value = 1). The expected behaviour in Cesium is that the box is only visible when zooming in (because Cesium uses geometric error = 1 from the subtree file instead of 500 from tileset.json).

To get this working, I’ve created tileset.json, the subtree file 0.0.0.json, and the file metadata.bin:

```json
  "schema": {
    "id": "first",
    "classes": {
      "tile": {
        "properties": {
          "geometricError": {
            "semantic": "TILE_GEOMETRIC_ERROR",
            "type": "SCALAR",
            "componentType": "FLOAT64"
          }
        }
      }
    }
  },
  "buffers": [
    {
      "uri": "metadata.bin",
      "byteLength": 8
    }
  ],
  "bufferViews": [
    {
      "buffer": 0,
      "byteOffset": 0,
      "byteLength": 8
    }
  ],
  "propertyTables": [
    {
      "count": 1,
      "class": "tile",
      "properties": {
        "geometricError": {
          "values": 0
        }
      }
    }
  ],
  "tileMetadata": 0
```
  • File metadata.bin contains a single double-precision (FLOAT64) binary value of 1.
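For reference, an 8-byte metadata.bin holding that single FLOAT64 can be written with a few lines of Python (a sketch; 3D Tiles binary metadata is little-endian):

```python
import struct

# A single FLOAT64 value: the per-tile geometricError override of 1.0,
# stored little-endian as required for 3D Tiles binary metadata.
payload = struct.pack("<d", 1.0)
assert len(payload) == 8  # matches the byteLength of the bufferView

with open("metadata.bin", "wb") as f:
    f.write(payload)
```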

The tileset is validated with 3d-tiles-validator, no errors are found.

Demo see https://bertt.github.io/cesium_3dtiles_samples/samples/1.1/implicit_semantics/geometric_error_issue/

Some observations:

  • Cesium viewer still uses geometric error = 500 :-(; the box is still visible after zooming out;

  • The debugShowGeometricError label shows both geometric error values (500 and 1) overlapping;

  • The tile metadata popup shows the semantic ‘geometricError = 1’ on mouse-over.
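For context on why 500 and 1 behave so differently: CesiumJS maps a tile’s geometric error to a screen-space error, roughly sse = geometricError * screenHeight / (distance * 2 * tan(fovY / 2)), and refines a tile when this exceeds maximumScreenSpaceError (16 by default). A rough sketch of the distances involved (the screen height and field of view here are assumptions, and the real CesiumJS computation has more cases):

```python
import math

def approx_refine_distance(geometric_error, screen_height_px=1080,
                           fov_y=math.radians(60), max_sse=16.0):
    """Approximate camera distance below which a tile with this
    geometric error produces a screen-space error above max_sse."""
    sse_denominator = 2.0 * math.tan(fov_y / 2.0)
    return geometric_error * screen_height_px / (max_sse * sse_denominator)

# geometricError = 500 is relevant from tens of kilometers away,
# while geometricError = 1 only matters within tens of meters.
d500 = approx_refine_distance(500.0)
d1 = approx_refine_distance(1.0)
```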

Question: how can this be fixed so that Cesium uses only geometricError = 1 (so the box is only visible when zooming in)? Or maybe this overriding of the automatically calculated geometric error by implicit tiling does not work with only a root level (so at least 2 levels are needed)?

> Or maybe this overriding of the automatically calculated geometric error by implicit tiling does not work with only root level (so at least 2 levels are needed)?

This might be true. When an explicit tile (in the tileset JSON) defines an implicitTiling, then one could argue about the structure that this represents. It could be this…

```
       root
        |
        |
   explicit tile (with implicitTiling)
        |
        |
  implicit tiling root (a single child of the explicit tile)
   |    |    |    |
   |    |    |    |
  c0   c1   c2   c3  (4 children, for a quadtree)
```

But one could also make a case for the “implicit tiling root” being a direct child of the root - thus, basically replacing the “explicit tile” that defined the implicitTiling.

I think (!) that CesiumJS uses the structure that is pictured above - and usually, that makes a lot of sense from the implementation perspective. But it might be that in CesiumJS the “implicit tiling root” is always displayed when the “explicit tile” is displayed - so there is no way to hide the “implicit tiling root” by setting a certain geometric error, because the decision of whether it is displayed only depends on the geometric error of the “explicit tile”.

Disclaimers:

  • There are some guesses involved in that. I’ll have to read the CesiumJS code more thoroughly.
  • There might be a bug in CesiumJS on top of that…

I think that there might be a bug, because I did some experiments. The goal was to have an implicit tileset with “very high” geometric errors at the root, and a sudden drop-off of the geometric error to 1.0 (to prevent deeper children from being rendered). But I received some unexpected results there. The data set that I used for these tests, and a description of the observations, can be found in CesiumGS/cesium issue #11174, “Possible error in computation of geometric error for implicit tilesets”, on GitHub.

In any case: Your tileset seems to be valid, and when traversing it with a standalone tool, it says that the geometric error is properly overridden:

```
Traversing tileset
  Traversed tile: ExplicitTraversedTile, level 0, path /root, geometricError 500 overridden to be 500
  Traversed tile: ImplicitTraversedTile, level 1, global: (level 0, (0,0)), root: (level 0, (0,0)), local: (level 0, (0,0)), geometricError 500 overridden to be 1
```

Ok, that’s a nice example you’ve created; there was no sample available as far as I know. It might be a good idea to add it to 3d-tiles-samples once it’s working correctly.

My use case is to change the geometric errors so that tiles at higher z-levels have the same geometric error as tiles at lower z-levels in a quadtree with implicit tiling, so the rule that the geometric error is divided by two on each z-level does not apply (it’s a little confusing to some users). I think overriding TILE_GEOMETRIC_ERROR is the way to achieve this. If there is a simpler method, let me know.

The example so far was focused on this issue. The task of creating a nice example that shows metadata in subtrees is still on my TODO list. There should probably be one that is similar to the TilesetWithFullMetadata example (as a stress test), and maybe a simpler, more visual example, and one that covers one (or all?) of the ‘semantics’ in a reasonable way. When these are available, they will be added to the 3d-tiles-samples repo.

(But… I have to revamp some of my local infrastructure that I’m using for creating these. The example that was added to the issue involved quite some “hacks”, with things like if (XXX_METADATA_EXPERIMENTS_GEOMETRIC_ERROR) { /* add stuff */ … }. I’m trying to wrap a few somewhat object-oriented wrappers around all that, but that takes some time…)

> My use case is to change the geometric errors so that tiles at higher z-levels have the same geometric error as tiles at lower z-levels in a quadtree with implicit tiling, so the rule that the geometric error is divided by two on each z-level does not apply (it’s a little confusing to some users). I think overriding TILE_GEOMETRIC_ERROR is the way to achieve this. If there is a simpler method, let me know.

Yes, the TILE_GEOMETRIC_ERROR semantic is the only way to override the default computation of the geometric error.

I’ve been thinking about that rule as well. On the one hand, one could argue that the geometricError of child tiles should be 0.25 of their parent’s for quadtrees, and 0.125 for octrees. But beyond that, there’s the question of what should be configurable, and how. One generalization would be to have something like a geometricErrorFactor somewhere, with 0.5 (or 0.25 or 0.125) being the default (and which would be 1.0 in your case). A more fine-grained approach could be something like a geometricErrorPerLevel = [100, 50, 2, 1] or so.
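Neither geometricErrorFactor nor geometricErrorPerLevel exist today; as a sketch of what the factor variant would compute (0.5 reproduces the current default, 1.0 gives the constant errors from your use case):

```python
def geometric_errors(root_error, levels, factor=0.5):
    """Per-level geometric errors of an implicit tileset under a
    hypothetical 'geometricErrorFactor': factor=0.5 is the usual
    halving per level, factor=1.0 keeps the error constant."""
    return [root_error * factor ** level for level in range(levels)]

halved = geometric_errors(500.0, 4)        # 500, 250, 125, 62.5
constant = geometric_errors(500.0, 4, 1.0) # 500 at every level
```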

With the TILE_GEOMETRIC_ERROR semantic, the geometric error is defined for each tile individually. This is somewhat cumbersome compared to just defaulting to the constant factor of 0.5 for each level, but it is ultimately the most fine-grained way to define it.
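For a subtree with more than one tile, this means one FLOAT64 per tile in the property table, in the same order as the tile availability bits (level by level, Morton order within each level). A sketch with made-up values (a subtree root of 100 and four children of 1; the numbers are placeholders):

```python
import struct

# Hypothetical per-tile geometric errors for a small quadtree subtree:
# the subtree root followed by its four children.
errors = [100.0, 1.0, 1.0, 1.0, 1.0]

# Little-endian FLOAT64s; the property table would use count = 5 and
# the bufferView a byteLength of 8 * 5 = 40.
payload = struct.pack("<%dd" % len(errors), *errors)

with open("metadata.bin", "wb") as f:
    f.write(payload)
```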

But… I’m a bit curious about the intention behind making the geometric error constant. This means that when the implicit tileset is loaded, all its data is loaded, immediately, with the highest level of detail. This might make sense for some (small/test) scenarios. But one should be careful to not store “the whole world, with all its complexity” in such a tileset, triggering it to be loaded all at once…

You can see the confusing effect here https://bertt.github.io/cesium_3dtiles_samples/samples/1.1/delaware/

In this sample I’ve used a maximum of 1000 buildings per tile, resulting in higher z-levels in urban areas and lower z-levels in rural areas. Users can get the impression that there is nothing in the urban area, and must zoom in further before the tiles at the higher z-levels are shown.

I think it’s more predictable (in this case) when the display of the tiles depends only on the distance to the camera and the geometricError (and not also on the z-level of the tile).

It would be nice if there were a parameter (in CesiumJS?) like geometricErrorFactor.

I see, that can be confusing. The circled area that is shown in the screenshot is level 2 of the tree. But only level 3 contains geometry data.

The intention behind this behavior is to support level of detail: You’d basically try to find a low-detail representation of the geometry that will be shown in level 3, and put this low-detail representation into level 2, and even lower detail in level 1, and even lower detail in level 0.

For level 0 (the whole city), the geometry could probably just be a single polygon of the city outline. On level 3, it will be the buildings that you already have. But I see that it could be hard to find sensible “lower-detail representations” for the steps in-between…

(Maybe this could be some sort of “convex hull” of all the buildings in that area? Or just 2D polygons of the building shapes (not extruded to 3D)? - it’s hard to imagine what these approaches would “look and feel” like when zooming and panning…)

If the intention is to either show no geometry or all geometry (at the highest level of detail), then setting a constant geometric error via the semantic is one (and I think the only) way to achieve that.

But again: This means that when the user zooms in, then it suddenly tries to load all B3DM files at once. The data set here may still be small enough for this to not be too critical, but one should be aware of what it means when this approach is applied with “many” buildings, and the buildings are more complex, with detailed geometry and textures - maybe with 100000 buildings where each building is a several-MB B3DM file…
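As a back-of-the-envelope check on that warning: with a constant geometric error, all tiles of a full quadtree refine together, and the tile count grows by a factor of 4 per level (8 for octrees). A small sketch:

```python
def total_tiles(levels, branching=4):
    """Tile count of a full tree with the given number of levels:
    branching=4 for a quadtree, branching=8 for an octree."""
    return sum(branching ** n for n in range(levels))

quad4 = total_tiles(4)    # 1 + 4 + 16 + 64 tiles
quad10 = total_tiles(10)  # already several hundred thousand tiles
```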
