Bad performance with San Diego 3D Tiles from CDB

Hi,

I have tried to convert the San Diego CDB terrain to 3D Tiles using cdb-to-3dtiles.

The San Diego terrain is available on the cdb-to-3dtiles project page.
(A texture is missing from this terrain.)

When I try to display the terrain in Unreal Engine using the Cesium for Unreal plugin, it is very slow (about 1 fps).
I display the terrain locally on my PC using:
file:///C:/MySanDiegoPath/tileset.json

Is this normal? How can I improve performance?
Is it maybe because of the number of buildings?
Is Cesium designed to automatically optimize the rendering of buildings, or should we do it ourselves? (If so, how?)

There are different aspects of “performance”, and different possibilities to increase the performance in various cases. When you are talking about the display performance (the “1 frame per second”), then this might be due to the complexity of the model. Fortunately, 3D Tiles and Cesium for Unreal allow you to tweak the performance in this case, by finding the right balance between “performance” and “visual quality”.

One thing that you could try:

  • Select the tileset in the “World Outliner”
  • In the “Details” panel, go to “Cesium → Level Of Detail”
  • Change the “Maximum Screen Space Error” from the default value (16.0) to a higher value.

Maybe start with a much higher value, like 50 or so (meaning “low quality, but high performance”), to get an impression of the effect, and then lower the value as you see fit. If you'd rather change this from code than from the editor UI, see the sketch below.
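
Here is a minimal, untested sketch of doing the same thing at runtime from C++. It assumes the Cesium for Unreal plugin's ACesium3DTileset actor and a SetMaximumScreenSpaceError setter; please check the Cesium3DTileset.h header of your plugin version for the exact names.

```cpp
// Untested sketch: raise the screen-space error of every Cesium tileset in
// the level. Assumes the Cesium for Unreal plugin exposes ACesium3DTileset
// with a SetMaximumScreenSpaceError setter (verify against your version).
#include "Cesium3DTileset.h"
#include "EngineUtils.h"

void LowerTilesetQuality(UWorld* World)
{
    for (TActorIterator<ACesium3DTileset> It(World); It; ++It)
    {
        // A higher value means coarser tiles and (usually) a higher frame rate.
        It->SetMaximumScreenSpaceError(64.0);
    }
}
```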

If this does not solve the problem, then this particular data set and setup may have to be investigated further.

Thanks.
I have used a “Maximum Screen Space Error” value of 1600 instead of the default (16).
It’s faster, but with very poor quality.
Maybe the San Diego terrain does not have LODs. Far buildings are always displayed.

Are there options to modify the LOD “switch” distance to the camera?
Is it possible to hide buildings beyond a maximum distance from the camera (ignoring LOD)?

The Maximum Screen Space Error allows you to control the trade-off between “visual quality” and “rendering performance”. Some details about the meaning of this value are explained in the 3D Tiles overview at 3d-tiles/3d-tiles-overview.pdf at main · CesiumGS/3d-tiles · GitHub (page 5). The basic idea is: each tile has a “geometric error”. From this geometric error and the current view, a screen space error is computed. If this error is too large, then a more detailed representation of the tile is loaded.
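
To make that relationship more concrete, here is a small, self-contained sketch of the commonly used perspective-view screen-space-error computation (the exact computation inside Cesium for Unreal may differ in detail):

```cpp
#include <cmath>
#include <iostream>

// Sketch of the usual perspective-view formula: a tile's geometric error
// (meters) is projected to an error in pixels, based on the distance to the
// tile, the viewport height, and the vertical field of view.
double screenSpaceError(double geometricErrorMeters,
                        double distanceToTileMeters,
                        double screenHeightPixels,
                        double verticalFovRadians)
{
    return (geometricErrorMeters * screenHeightPixels) /
           (distanceToTileMeters * 2.0 * std::tan(verticalFovRadians / 2.0));
}

int main()
{
    const double pi = 3.14159265358979323846;
    // Example: geometric error 70 m, viewed from 5 km away, 1080-pixel-high
    // viewport, 60-degree vertical FOV -> roughly 13 px of screen space error.
    double sse = screenSpaceError(70.0, 5000.0, 1080.0, 60.0 * pi / 180.0);
    std::cout << "screen space error: " << sse << " px\n";
    // If this value exceeds the "Maximum Screen Space Error" setting, the
    // runtime refines to the tile's more detailed children; otherwise the
    // coarse tile is rendered as is.
    return 0;
}
```

This is also why raising the maximum allowed error keeps distant, coarse tiles acceptable for longer, and why tiles with badly assigned geometric errors can defeat the whole mechanism.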

So… if 16.0 gives good quality but low performance, and 1600 gives low quality but good performance, you might want to try 800.0 and see whether the trade-off is right there…

Referring to your example: “Far buildings are always displayed” - that’s right. But they should be displayed at a low resolution, and thus not noticeably affect performance. Only buildings that are close to the camera should be rendered at high resolution.

But… it might also be that the input data does not have a “sensible” geometric error. For example, if the geometric error were the same for all tiles, regardless of their actual detail, then the whole approach wouldn’t work. I don’t know the data set or the related tools. Some insights might already be gained from looking at the tileset.json. But again: if this cannot be figured out here, then we might have to take a closer look at the resulting tileset (if it can be shared publicly).
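
If you want to check whether the converter assigned decreasing geometric errors to more detailed tiles, a quick way is to walk the tile tree in the tileset.json. Here is a minimal sketch, assuming the single-header nlohmann/json library and a tileset.json that follows the usual 3D Tiles layout (a "root" tile with "geometricError" and optional "children"):

```cpp
// Sketch: recursively print the geometric error at each depth of a tileset.
#include <fstream>
#include <iostream>
#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

static void printGeometricErrors(const json& tile, int depth)
{
    std::cout << std::string(depth * 2, ' ')
              << "depth " << depth
              << ", geometricError = " << tile.value("geometricError", -1.0)
              << "\n";
    if (tile.contains("children"))
    {
        for (const auto& child : tile["children"])
        {
            printGeometricErrors(child, depth + 1);
        }
    }
}

int main(int argc, char** argv)
{
    // The path is a placeholder; point it at the generated tileset.json.
    std::ifstream file(argc > 1 ? argv[1] : "tileset.json");
    json tileset = json::parse(file);
    printGeometricErrors(tileset["root"], 0);
    return 0;
}
```

If the printed values do not shrink as the depth increases, that would explain why far buildings never switch to a coarser representation.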