When you are talking about different maximumScreenSpaceError values, then I assume that the difference you are talking about is actually in the geometricError values of the tilesets. The geometricError is used as the basis for computing the screen space error, and the maximumScreenSpaceError is only the threshold at which the viewer should try to refine the model by loading tiles with a lower geometricError (and therefore, tiles that cause a lower screen space error).
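For reference, the screen space error that is compared against the maximumScreenSpaceError is derived from the geometricError, the distance to the tile, and the screen size. A minimal sketch of that relationship, assuming a simple perspective projection (the function and parameter names here are only illustrative, and the actual CesiumJS implementation includes further refinements, like dynamic screen space error adjustments):

// Simplified sketch of the screen space error computation for a
// perspective projection. Parameter names are illustrative and not
// part of the CesiumJS API.
function computeScreenSpaceError(geometricError, distance, screenHeightInPixels, fovY) {
  // The size (in pixels) that an object with the size of the
  // geometricError covers on the screen at the given distance
  return (geometricError * screenHeightInPixels) / (distance * 2.0 * Math.tan(fovY / 2.0));
}

// A tile is refined (i.e. its children are loaded) when its screen
// space error exceeds the maximumScreenSpaceError of the tileset
const sse = computeScreenSpaceError(100.0, 2000.0, 1080, Math.PI / 3.0);
const shouldRefine = sse > 16.0;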
That said: there is no common scale for the geometricError. The values for the geometricError depend highly on the nature of the data - for example, whether it is a CAD model, terrain data, or something that is not even 'mesh data', like a point cloud, or a simple mesh where the main difference between the levels of detail is the resolution of the texture that is applied to the mesh. And it is true that the same 3D model could be converted into 3D Tiles by two different tools, where one assigns geometric error values between 100 and 1000, and the other assigns geometric error values between 1 and 10.
In CesiumJS, you can select a different value for the maximumScreenSpaceError for each tileset. As an example: the TilesetWithDiscreteLOD sample contains a model with three levels of detail, with geometric errors of 100, 10, and 0. When this tileset is added to a viewer twice, once with a maximumScreenSpaceError of 1 and once with 512, then you can see the difference in the refinement behavior: for the first one, the highest level of detail is loaded immediately; for the second one, the highest level is only loaded when zooming in very closely.

There is no silver bullet for this value, and no approach to "make it universally correct, all the time". But the tileset-specific maximumScreenSpaceError can be used as a "steering factor" for tilesets with different magnitudes of geometricError values.
Is there a way to convert a 3D Tiles model optimized for one value of MSSE to a different value of MSSE?
There is no tool that can do this automatically. However, it would boil down to just traversing the tileset and adjusting the geometricError value (for example, scaling it by a certain factor). We can definitely consider adding such functionality to the 3d-tiles-tools if there is broader demand for that.
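Just as a rough sketch of what such a traversal could look like (this is not an existing 3d-tiles-tools feature; the file names and the scaling factor are only placeholders, and external tileset references are not handled here), a small Node.js script that scales all geometricError values in a tileset JSON by a constant factor:

// Sketch: scale all geometricError values of a tileset JSON by a factor.
// NOT part of the 3d-tiles-tools - file names and factor are placeholders.
const fs = require("fs");

function scaleGeometricErrors(tile, factor) {
  if (tile.geometricError !== undefined) {
    tile.geometricError *= factor;
  }
  if (tile.children) {
    for (const child of tile.children) {
      scaleGeometricErrors(child, factor);
    }
  }
}

const tileset = JSON.parse(fs.readFileSync("tileset.json", "utf8"));
// The tileset itself also carries a top-level geometricError
tileset.geometricError *= 0.1;
scaleGeometricErrors(tileset.root, 0.1);
fs.writeFileSync("tileset-scaled.json", JSON.stringify(tileset, null, 2));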
The Sandcastle for the comparison, just in case someone wants to try it out:
const viewer = new Cesium.Viewer("cesiumContainer");

// Creates the tileset with the given maximumScreenSpaceError, adds it
// to the scene, and translates it by the given offset along the x-axis
// so that both instances can be compared side by side
async function createTileset(offsetX, msse) {
  const tileset = await Cesium.Cesium3DTileset.fromUrl(
    "http://localhost:8003/tileset.json", {
      debugShowBoundingVolume: true,
      debugShowGeometricError: true,
      maximumScreenSpaceError: msse
    }
  );
  viewer.scene.primitives.add(tileset);
  const translationMatrix = Cesium.Matrix4.fromTranslation(
    new Cesium.Cartesian3(offsetX, 0, 0),
    new Cesium.Matrix4()
  );
  const modelMatrix = Cesium.Matrix4.multiply(
    translationMatrix,
    tileset.modelMatrix,
    new Cesium.Matrix4()
  );
  tileset.modelMatrix = modelMatrix;
  return tileset;
}

try {
  // The same tileset, once with a very small and once with a very
  // large maximumScreenSpaceError
  const tilesetA = await createTileset(0, 1);
  const tilesetB = await createTileset(1500, 128);
  const offset = new Cesium.HeadingPitchRange(
    Cesium.Math.toRadians(22.5),
    Cesium.Math.toRadians(-22.5),
    3000.0
  );
  viewer.zoomTo(tilesetB, offset);
} catch (error) {
  console.log(`Failed to load tileset: ${error}`);
}