Recommendations for testing performance of 3D Tiles

Hi

I’m interested in whether anyone has thoughts on the most appropriate way to track performance statistics whilst using 3D Tiles in Unreal’s Cesium plugin.

I know the usual stat fps and stat unit are useful for a quick review, but I need something a little more repeatable and consistent.

I’ve previously used a Blueprint to capture GPU/CPU metrics across a few fixed camera positions at runtime, and then reviewed the resulting data in Unreal Insights to check whether there are any issues with overloading the rendering pipeline.
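For reference, the C++ equivalent of what that Blueprint does is roughly the sketch below. It’s a condensed, single-file sketch; the class name, the CaptureTransforms property, and the bookmark labels are just placeholders I’ve chosen here, while Trace.Start / Trace.Stop / Trace.Bookmark are the standard Unreal Insights console commands.

```cpp
// PerfCaptureActor.h (condensed sketch): teleport the player to a set of
// fixed positions, dwell at each one, and bookmark each position in an
// Unreal Insights trace so the capture is repeatable run-to-run.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "GameFramework/Pawn.h"
#include "GameFramework/PlayerController.h"
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetSystemLibrary.h"
#include "TimerManager.h"
#include "PerfCaptureActor.generated.h"

UCLASS()
class APerfCaptureActor : public AActor
{
    GENERATED_BODY()

public:
    // Fixed camera positions to sample, filled in from the editor.
    UPROPERTY(EditAnywhere, Category = "Perf")
    TArray<FTransform> CaptureTransforms;

    // How long to dwell at each position before moving to the next.
    UPROPERTY(EditAnywhere, Category = "Perf")
    float SecondsPerPosition = 10.0f;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Start an Unreal Insights trace for the whole run.
        UKismetSystemLibrary::ExecuteConsoleCommand(this, TEXT("Trace.Start default"));

        GetWorldTimerManager().SetTimer(
            StepTimer, this, &APerfCaptureActor::NextPosition, SecondsPerPosition, true, 0.0f);
    }

    void NextPosition()
    {
        if (CurrentIndex >= CaptureTransforms.Num())
        {
            // All positions sampled: stop the trace and the timer.
            UKismetSystemLibrary::ExecuteConsoleCommand(this, TEXT("Trace.Stop"));
            GetWorldTimerManager().ClearTimer(StepTimer);
            return;
        }

        if (APlayerController* PC = UGameplayStatics::GetPlayerController(this, 0))
        {
            if (APawn* Pawn = PC->GetPawn())
            {
                const FTransform& T = CaptureTransforms[CurrentIndex];
                Pawn->SetActorTransform(T);
                PC->SetControlRotation(T.Rotator());
            }
        }

        // Drop a bookmark so each position is easy to find in Unreal Insights.
        UKismetSystemLibrary::ExecuteConsoleCommand(
            this, FString::Printf(TEXT("Trace.Bookmark Position_%d"), CurrentIndex));

        ++CurrentIndex;
    }

private:
    FTimerHandle StepTimer;
    int32 CurrentIndex = 0;
};
```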

However, I’ve only tested static meshes and traditional Landscape assets so far. For streaming 3D Tiles, is there anything in particular to take into account? For example, is it more important to have a moving camera whilst capturing performance, and should the cache be flushed before each test is carried out?

Thanks for any help

I guess it depends what your purpose is in capturing these statistics. What are you actually trying to accomplish?

Essentially to check for any degradation in performance as I add more 3D Tiles, particularly building/model layers. It would also give me a baseline and benchmarks for every significant change made in the project, so I can catch any impacts early and quickly identify changes that have had a negative effect on performance.

Yeah, repeatability is going to be the hard part there, as 3D Tiles loading performance is inherently affected by things like network latency, so I don’t have a perfect answer for you. Watching for unexpected frame time spikes is a good start, though keep in mind that a moderate frame time spike can actually indicate improved load performance: it means more data was received from the network and sent to the GPU in that frame. You could also look at total load time with a known and fixed view, and perhaps a warm cache to take the network out of the equation.
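If it helps, here’s a rough sketch of the “total load time from a known, fixed view” idea. I’m assuming here that your version of the plugin exposes ACesium3DTileset::GetLoadProgress() (a 0–100 percentage of the tiles needed for the current view); check the Cesium3DTileset.h header in your installed version before relying on it, and treat the actor/property names as placeholders.

```cpp
// LoadTimeProbe.h (condensed sketch): start a timer when the level begins
// with a fixed camera, and log the elapsed time once the tileset reports
// that everything needed for the current view has finished loading.
#include "CoreMinimal.h"
#include "Cesium3DTileset.h"
#include "GameFramework/Actor.h"
#include "LoadTimeProbe.generated.h"

UCLASS()
class ALoadTimeProbe : public AActor
{
    GENERATED_BODY()

public:
    // The tileset to time, assigned in the editor.
    UPROPERTY(EditAnywhere, Category = "Perf")
    ACesium3DTileset* Tileset = nullptr;

    ALoadTimeProbe()
    {
        PrimaryActorTick.bCanEverTick = true;
    }

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        StartSeconds = FPlatformTime::Seconds();
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (bDone || !Tileset)
        {
            return;
        }

        // GetLoadProgress() reports how much of the current view is loaded,
        // as a percentage.
        if (Tileset->GetLoadProgress() >= 100.0f)
        {
            bDone = true;
            const double Elapsed = FPlatformTime::Seconds() - StartSeconds;
            UE_LOG(LogTemp, Display,
                TEXT("Tileset fully loaded for the fixed view in %.2f s"), Elapsed);
        }
    }

private:
    double StartSeconds = 0.0;
    bool bDone = false;
};
```

Run it once to warm the cache, then compare the timings of subsequent runs so the network is largely out of the picture.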

Thanks Kevin, that’s a really good point re the warm cache. I guess testing with a cleared cache is really biased towards someone’s bandwidth/connection, rather than focusing on any GPU rendering/memory bottlenecks.
Should I keep an eye out for anything in relation to memory - GPU or system?

In regard to ‘warm cache’ tests - is this simply a case of flying around the area, ensuring that the cache populates the SQLite database in Unreal’s FPaths::EngineUserDir() directory?
And potentially increasing the figure in ‘Maximum Cached Bytes’?
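(For completeness: when I do want a cold-cache comparison, I’ve been deleting that SQLite file before launching. Something like the snippet below; the file name is just what I see on disk on my machine and may differ between plugin versions, so check the contents of FPaths::EngineUserDir() rather than taking it as gospel.)

```cpp
// One-off helper run before a cold-cache benchmark. The file name below is
// what I see on my machine and may differ between plugin versions; verify it
// against the contents of FPaths::EngineUserDir() before relying on this.
#include "CoreMinimal.h"
#include "HAL/FileManager.h"
#include "Misc/Paths.h"

static void DeleteCesiumDiskCache()
{
    const FString CacheFile =
        FPaths::Combine(FPaths::EngineUserDir(), TEXT("cesium-request-cache.sqlite"));

    if (IFileManager::Get().FileExists(*CacheFile))
    {
        const bool bDeleted = IFileManager::Get().Delete(*CacheFile);
        UE_LOG(LogTemp, Display, TEXT("Deleting Cesium disk cache (%s): %s"),
            *CacheFile, bDeleted ? TEXT("ok") : TEXT("failed"));
    }
}
```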

Maximum Cached Bytes is the size of the in-memory cache. Warming that up is good, too, but the more essential thing is the disk cache. You can change its size in Project Settings → Cesium.

Flying around randomly in the area you’re interested in won’t work all that well. With large tilesets, the number of tiles in even a relatively small area is enormous, so getting them all is unlikely. Better to follow a particular consistent flight path. If you use the Sequencer and record a video of the flight, Cesium for Unreal will automatically ensure each tile is loaded for each frame, so that’s a good way to make sure none are missed.

Be aware, though, that the Google Photogrammetry tileset, if you’re using that one, isn’t cached because its HTTP response headers don’t allow it. To be precise, the headers allow caching, but then require that the response is revalidated each time. Cesium for Unreal currently treats this as “don’t cache it at all.”

Monitoring memory usage on both the CPU and GPU is probably straightforward and worthwhile.
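On the CPU side, something like the sketch below is enough, and for finer GPU/RHI detail you can fire the built-in MemReport console command alongside it. LogMemorySnapshot is just a name I’ve picked for this sketch.

```cpp
// Snapshot system memory and dump a detailed engine memory report.
// The MemReport output (which includes RHI/GPU allocations) lands under
// Saved/Profiling/MemReports/ and can be diffed between runs.
#include "CoreMinimal.h"
#include "HAL/PlatformMemory.h"
#include "Kismet/KismetSystemLibrary.h"

static void LogMemorySnapshot(const UObject* WorldContext, const FString& Label)
{
    const FPlatformMemoryStats Stats = FPlatformMemory::GetStats();

    UE_LOG(LogTemp, Display,
        TEXT("[%s] Physical used: %.1f MB (peak %.1f MB), available: %.1f MB"),
        *Label,
        Stats.UsedPhysical / (1024.0 * 1024.0),
        Stats.PeakUsedPhysical / (1024.0 * 1024.0),
        Stats.AvailablePhysical / (1024.0 * 1024.0));

    // Detailed per-allocator and RHI breakdown, written to disk.
    UKismetSystemLibrary::ExecuteConsoleCommand(WorldContext, TEXT("MemReport -full"));
}
```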

Thanks, I appreciate the clarification.

Regarding ‘Maximum Cached Bytes’ - is that system memory (RAM) or GPU memory?
When and how frequently is it automatically flushed?

Regarding the disk cache - I’m assuming increasing both of these is fine to ensure a good cache; would doubling them be a good place to start?
[screenshot of the cache size settings]

Maximum Cached Bytes is actually measured as the sum of the bytes downloaded from the network for the currently-loaded tiles. It’s measured after gunzipping the network response, but before format-specific decompression like Draco and JPEG decoding. So the actual memory usage on both the CPU and GPU is usually higher than this number.

This limit is checked every frame, but tiles will never be unloaded if they’re needed to render the current frame. So you can set it to 1, for example, without breaking anything, but the actual size of the loaded tiles in that case will surely exceed 1 byte.