Optimize 3DTiles for different MaximumScreenSpaceError value

Hi.
We use Bentley’s ContextCapture software to produce photorealistic 3DTiles models for mapping purposes, and we recently came to realize that for some reason CC uses a different MaximumScreenSpaceError value for its models - CC models are optimized for an MSSE of 1, while most other photogrammetry software uses the default value of 16. This causes CC models to load incorrectly when viewed alongside other models or in viewers optimized for MSSE = 16.

We already contacted Bentley’s support about this issue and it will be fixed in an “upcoming” release, but we already have a large library of models we would like to use, preferably without waiting for some future CC version.

Is there a way to convert a 3DTiles model optimized for one value of MSSE to a different value of MSSE?

When you are talking about different MaximumScreenSpaceError values, I assume that the difference you mean is actually in the geometricError values of the tilesets. The geometricError is used as the basis for computing the screen space error, and the maximumScreenSpaceError is only the threshold at which the viewer should try to refine the model, by loading tiles with a lower geometricError (and therefore, tiles that cause a lower screen space error).
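
For reference, here is a simplified sketch of how a screen space error (in pixels) is derived from a geometricError (in meters) for a perspective view - roughly what CesiumJS does internally, although the actual computation also handles orthographic frustums and further heuristics, and the variable names and numbers here are only illustrative:

// Simplified sketch: how a geometricError translates into a screen space error
function screenSpaceError(geometricError, distance, screenHeight, fovy) {
  // Larger geometric errors, closer distances, taller screens and narrower
  // fields of view all lead to a larger error on the screen
  const sseDenominator = 2.0 * Math.tan(0.5 * fovy);
  return (geometricError * screenHeight) / (distance * sseDenominator);
}

// Example: a tile with geometricError 10, seen from 1000m away, on a screen
// that is 1080 pixels high, with a vertical field of view of 60 degrees
const fovy = 60 * Math.PI / 180;
const sse = screenSpaceError(10.0, 1000.0, 1080, fovy);
// The viewer tries to refine the tile as long as this value exceeds the
// maximumScreenSpaceError
console.log(sse);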

That said: There is no common scale for the geometricError. The values for the geometricError highly depend on the nature of the data - for example, whether it is a CAD model, or terrain data, or something that is not even ‘mesh data’, like a point cloud, or a simple mesh where the main difference in the levels of detail is in the resolution of the texture that is applied to the mesh. And it is true that the same 3D model could be converted into 3D Tiles by two different tools, where one assigns geometric error values between 100 and 1000, and the other one assigns geometric error values between 1 and 10.

In CesiumJS, you can select different values for the maximumScreenSpaceError for each tileset. As an example: The TilesetWithDiscreteLOD contains a model with three levels of detail, with geometric errors being 100, 10, and 0.

When this tileset is added to a viewer twice, once with a maximumScreenSpaceError of 1 and once with 128, then you can see the difference in the refinement behavior: For the first one, the highest level of detail is loaded immediately. For the second one, the highest level is only loaded when zooming in very closely.

[Animation “Cesium GE”: the two tilesets refining with different maximumScreenSpaceError values]

There is no silver bullet for this value, and no approach to “make it universally correct, all the time”. But the tileset-specific maximumScreenSpaceError can be used as a “steering factor” for tilesets with different magnitudes of geometricError values.

Is there a way to convert a 3DTiles model optimized for one value of MSSE to a different value of MSSE?

There is no tool that can do this automatically. However, it would boil down to just traversing the tileset and adjusting the geometricError values (for example, scaling them by a certain factor). We can definitely consider adding such functionality to the 3d-tiles-tools if there is broader demand for it.
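
In the meantime, here is a minimal sketch of what such a conversion could look like, assuming a Node.js environment and a single tileset.json without external tilesets (external tileset files referenced via content.uri would have to be processed the same way). The file names and the scale factor are only examples, and this is not an existing 3d-tiles-tools feature:

// scale-geometric-error.js - minimal sketch, not an official tool
const fs = require("fs");

function scaleGeometricError(tile, factor) {
  // Each tile may carry its own geometricError; scale it if present
  if (typeof tile.geometricError === "number") {
    tile.geometricError *= factor;
  }
  // Recurse into the children, if any
  for (const child of tile.children ?? []) {
    scaleGeometricError(child, factor);
  }
}

const factor = 16; // example: convert a tileset authored for MSSE = 1 to MSSE = 16
const tileset = JSON.parse(fs.readFileSync("tileset.json", "utf8"));

// The top-level geometricError of the tileset itself also has to be scaled
tileset.geometricError *= factor;
scaleGeometricError(tileset.root, factor);

fs.writeFileSync("tileset.scaled.json", JSON.stringify(tileset, null, 2));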


The sandcastle for the comparison, just in case someone wants to try it out:


const viewer = new Cesium.Viewer("cesiumContainer");

async function createTileset(offsetX, msse) {
  // Create the tileset with the given maximumScreenSpaceError and add it to the scene
  const tileset = await Cesium.Cesium3DTileset.fromUrl(
    "http://localhost:8003/tileset.json", {
      debugShowBoundingVolume: true,
      debugShowGeometricError: true,
      maximumScreenSpaceError: msse
    }
  );
  viewer.scene.primitives.add(tileset);

  // Move the tileset along the x-axis so that both copies are visible side by side
  const translationMatrix = Cesium.Matrix4.fromTranslation(
    new Cesium.Cartesian3(offsetX, 0, 0),
    new Cesium.Matrix4()
  );
  const modelMatrix = Cesium.Matrix4.multiply(
    translationMatrix,
    tileset.modelMatrix,
    new Cesium.Matrix4()
  );
  tileset.modelMatrix = modelMatrix;
  return tileset;
}

try {
  // Same tileset, loaded twice with different maximumScreenSpaceError values
  const tilesetA = await createTileset(0, 1);
  const tilesetB = await createTileset(1500, 128);

  // Look at the second tileset from a slightly tilted angle
  const offset = new Cesium.HeadingPitchRange(
    Cesium.Math.toRadians(22.5),
    Cesium.Math.toRadians(-22.5),
    3000.0
  );
  viewer.zoomTo(tilesetB, offset);
} catch (error) {
  console.log(`Failed to load tileset: ${error}`);
}


Thanks for the information.

I wrote a little Python script to scan through all the JSON files of the dataset and multiply all the geometricError values by 16, and at least visually this does seem to resolve the issue.

Is it correct then to assume that if a tileset was optimized for MSSE = 1, then scaling the GE by 16 would make it optimized for the default value of MSSE = 16?
It would make sense, as the SSE seems to be linear with respect to the GE, but I do wonder whether there are other factors that could affect the performance when displaying multiple large tilesets ‘converted’ using this method.

There are some aspects of the geometric error that could be described more extensively and precisely. We are considering different possible clarifications, but as long as this is ongoing, I have to remain a bit vague here.

Simply scaling the geometric error of all tiles of a tileset can achieve the desired effect here, but it will also affect the performance: The viewer will have to load more tiles, simply because it keeps loading tiles with lower and lower geometric errors until the loaded tiles no longer cause the maximumScreenSpaceError to be exceeded.

The SSE will usually depend linearly on the GE, but it also takes the screen resolution into account. And beyond all that, on the level of “brainstorming”, one could consider further options for “prioritizing” the tile loading (e.g. whether an object is “large” in terms of its bounding box, or whether it is in the center of the screen). But these would be highly domain- and use-case-specific, so the geometric error (in combination with the SSE) is the most important and most generic steering factor here for now.
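
As a quick sanity check of that linearity, using the same simplified SSE computation as above (so ignoring resolution scaling and other heuristics): scaling the geometric error of every tile by 16 and comparing against an MSSE of 16 leads to the same refinement decisions as the original geometric errors compared against an MSSE of 1. The numbers here are only examples:

// Sanity check: scaling both the geometric error and the threshold by the
// same factor leaves the refinement decision unchanged (simplified SSE)
function sse(geometricError, distance, screenHeight, fovy) {
  return (geometricError * screenHeight) / (distance * 2.0 * Math.tan(0.5 * fovy));
}

const fovy = 60 * Math.PI / 180;
const factor = 16;
for (const distance of [100, 1000, 10000, 100000]) {
  const refineOriginal = sse(2.5, distance, 1080, fovy) > 1;          // GE as authored, MSSE = 1
  const refineScaled = sse(2.5 * factor, distance, 1080, fovy) > 16;  // GE * 16, MSSE = 16
  console.log(distance, refineOriginal === refineScaled);             // always true
}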
