I am investigating the performance of Cesium when displaying visualizations with a moderate number of entities (~300) and a large number of samples (~500,000).
Currently, my entities only have a position, orientation, billboard, and model. The position and orientation are SampledProperties, and samples stream in over a WebSocket at 60 Hz per entity, each one inserted with addSample().
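For context, here is roughly how I coalesce the incoming messages before touching the Entity API (all names here are mine, not Cesium's; in the real app the flush callback calls addSample() on the corresponding SampledPositionProperty):

```javascript
// Buffers 60 Hz per-entity WebSocket samples and flushes them in batches,
// e.g. once per render frame instead of once per message.
class SampleBuffer {
  constructor(flush) {
    this.flush = flush;       // called with (entityId, bufferedSamples)
    this.pending = new Map(); // entityId -> array of { time, position }
  }

  // Called once per WebSocket message.
  push(entityId, time, position) {
    let samples = this.pending.get(entityId);
    if (!samples) {
      this.pending.set(entityId, (samples = []));
    }
    samples.push({ time, position });
  }

  // Called once per frame; hands each entity's batch to the flush callback.
  drain() {
    for (const [entityId, samples] of this.pending) {
      this.flush(entityId, samples);
    }
    this.pending.clear();
  }
}
```

This at least amortizes the per-message overhead on my side, but the per-sample cost inside Cesium is what I'm asking about below.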
Now, I would like to know if there is any "fast path" for uploading my samples into Cesium, akin to how vertex buffers work in glTF, where the file literally stores the in-memory representation of the WebGL attribute data as a Float32Array.
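To make the kind of fast path I'm imagining concrete, here is a sketch of packing samples into flat typed arrays, glTF-accessor style, with no per-sample object allocation (the function is my own illustration, not a Cesium API):

```javascript
// Packs { time, x, y, z } samples into flat typed arrays: one Float64Array of
// times (seconds past some epoch) and one interleaved Float32Array of x, y, z.
// Ideally I could hand a structure like this to Cesium in one call.
function packPositionSamples(samples) {
  const times = new Float64Array(samples.length);
  const values = new Float32Array(samples.length * 3);
  for (let i = 0; i < samples.length; i++) {
    const s = samples[i];
    times[i] = s.time;
    values[3 * i] = s.x;
    values[3 * i + 1] = s.y;
    values[3 * i + 2] = s.z;
  }
  return { times, values };
}
```

Is there any ingestion path in Cesium that can consume a packed layout like this directly, rather than one addSample() call per sample?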
I did look at the source code for EntityCollection and noticed it is backed by an AssociativeArray, which in turn is a dual-representation data structure backed by both a JS Object and a JS Array. Unfortunately, I can't just drop a chunk of Array in there, since that would break the associative part of the AssociativeArray.
Also, I have seen some recommendations on this message board to avoid the Entity API and use the Primitive API (e.g. BillboardCollection) when dealing with large collections. Is this still the case? Is the advantage primarily in storage or in compute, and what does one give up by abandoning the Entity API?
I am open to any other suggestions or tips as well. I think the Cesium documentation as a whole is great, but I have had to really scour it for tips on handling large, non-trivial data sets! EntityCollection.suspendEvents() seems to be the only feature tailored toward bulk inserts.
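For reference, this is the bulk-insert pattern I'm currently using around suspendEvents()/resumeEvents() (the helper is mine; EntityCollection does expose both methods, and the try/finally ensures events resume even if an insert throws):

```javascript
// Suspends collection events for the duration of a batch of mutations, so
// listeners fire once at the end instead of once per insert.
// `collection` is anything exposing suspendEvents()/resumeEvents(),
// such as a Cesium EntityCollection.
function withSuspendedEvents(collection, batch) {
  collection.suspendEvents();
  try {
    batch(collection);
  } finally {
    collection.resumeEvents();
  }
}
```

In my app, batch() loops over the ~300 entities and pushes the buffered samples into each one. Is there anything else along these lines I'm missing?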
Another thought I had was to use a CZML data source instead of manually creating entities and sampled properties, but that sounds like it might actually add more overhead when the updates arrive in small, frequent increments.
Thanks so much!