Draco Decompression time

Hi,

I’m looking at reducing the time it takes for models to load in CesiumJS. We currently have over 100 photogrammetry models covering a fairly small site (internal and external meshes), so it’s not surprising that they take some time to load.

These were tiled with Draco compression level 7 and WebP at 80 (the defaults). I’m now retiling these models at Draco level 10 and WebP 70.

I’m trying to find the right balance between model size and decompression time. Would it be safe to say that models with higher compression levels, although taking longer to decode, will still stream faster overall than models with a lower compression level (6/7, for example)? Or will this depend on a user’s GPU?

Is there any way to check the time it takes for Draco-compressed models to be decoded? Or can we only compare network speeds against other models tiled with different compression levels?

Thanks,
Tom

Hi Tom,

Your understanding of the tradeoffs is generally correct, although note that the decoding happens on the CPU, and it’s the quantized data that gets uploaded to the GPU, so only the user’s CPU speed/cores matter for decompression. Other than that, yes: it depends on the data itself (how much smaller it actually gets when you go up a compression level) and on the user’s CPU (whether the reduction in the amount of data being streamed outweighs the extra time needed to decompress it).

The initialTilesLoaded event is useful for measuring the time it takes for a tileset to resolve. That might help you make the right decision for your data.
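As a rough sketch of what that measurement could look like (assuming a `Cesium.Viewer` named `viewer`, a recent CesiumJS release with the async `Cesium3DTileset.fromUrl` API, and a placeholder tileset URL; older versions construct the tileset with `new Cesium.Cesium3DTileset({ url })` instead):

```js
// Record a start time, then measure how long the initial view takes to resolve.
const start = performance.now();

// Placeholder URL; substitute your own tileset.json (or an ion asset).
const tileset = await Cesium.Cesium3DTileset.fromUrl(
  "http://localhost:8003/tilesets/yourTileset/tileset.json"
);
viewer.scene.primitives.add(tileset);

// initialTilesLoaded fires once, when every tile needed for the initial view has loaded.
tileset.initialTilesLoaded.addEventListener(function () {
  console.log(`Initial tiles loaded in ${(performance.now() - start).toFixed(0)} ms`);
});

// allTilesLoaded fires whenever all tiles meeting the current screen-space error are loaded,
// which is handy if you also want to time subsequent camera moves.
tileset.allTilesLoaded.addEventListener(function () {
  console.log(`All visible tiles loaded at ${(performance.now() - start).toFixed(0)} ms`);
});
```

Running the same camera view against versions of the tileset compressed at different levels (ideally with the browser cache disabled) gives you an end-to-end comparison that includes both download and decode; your browser dev tools’ Network panel will show the transfer sizes alongside.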

Here are a couple additional resources as well:

  1. Compressing Massive Point Clouds with 3D Tiles and Draco – Cesium
  2. Draco Compressed Meshes with glTF and 3D Tiles – Cesium

Matt