How can we control how many tiles are loaded at any given time? We want to load more tiles closer to us and fewer farther away, or even stop loading tiles beyond a certain distance. Likewise, how can we unload tiles when we leave an area? We are working mostly with the Oculus Quest 2; it works, but we notice the memory usage gets quite high and leads to crashes. We tried unchecking Preload Ancestors and Preload Siblings, but then loading becomes too slow, and adjusting Maximum Simultaneous Tile Loads and Loading Descendant Limit had some effect, but we still crash even when we set them to, say, 5.
There’s an option under the hood called “maximumCachedBytes” which would have the effect you want, but unfortunately it’s not exposed in the UI as a UProperty yet. I just wrote an issue to do that:
We tried setting this to a lower number in TilesetOptions, even as low as 32 MB instead of 512 MB. It had some effect, but less than we had hoped, and we're still seeing memory issues. Is there anything else we can try? Ideally we'd like high LOD within a radius of, say, 200-300 meters, with anything beyond that in low resolution. Also, usage currently seems to be cumulative: at 512 MB we crash quicker, but at 32 MB memory still climbs and we crash after 2-3 minutes of moving through the space or turning around in various directions. Again, it seems like 'purging' is slower than adding new tiles.
Are you doing this in the Editor? The Editor unfortunately does no garbage collection, so memory will always grow without bound in the Editor. You can force garbage collection every frame by running
gc.CollectGarbageEveryFrame 1
in the console.
If that doesn’t help, you can try setting the maximumCachedBytes property to 0, which means no tiles are cached beyond what is currently needed for rendering. If that still doesn’t help (it sounds like it won’t), the next possibility is that there are simply too many tiles required for rendering. You can adjust that by increasing the maximumScreenSpaceError property (at the cost of reduced quality).
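To illustrate why a small maximumCachedBytes still can't evict tiles that are needed for the current view, here is a minimal, hypothetical sketch of a byte-budget tile cache. This is not the actual cesium-native implementation; all names and structure here are invented for illustration. The key point: tiles the renderer currently needs are never evicted, so the floor on memory usage is whatever the current view requires, regardless of the budget.

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <unordered_set>

// Hypothetical byte-budget tile cache (illustration only).
// Unused tiles are evicted oldest-first until the total fits under
// maximumCachedBytes; tiles needed this frame are always kept.
struct TileCache {
    int64_t maximumCachedBytes;
    int64_t totalBytes = 0;
    std::list<std::string> lru;                      // oldest tile id at front
    std::unordered_map<std::string, int64_t> sizes;  // tile id -> byte size

    void load(const std::string& id, int64_t bytes) {
        if (sizes.count(id)) return;  // already resident
        sizes[id] = bytes;
        lru.push_back(id);
        totalBytes += bytes;
    }

    // Called once per frame with the set of tiles the renderer still needs.
    void prune(const std::unordered_set<std::string>& neededThisFrame) {
        for (auto it = lru.begin();
             it != lru.end() && totalBytes > maximumCachedBytes;) {
            if (neededThisFrame.count(*it)) { ++it; continue; }  // never evict visible tiles
            totalBytes -= sizes[*it];
            sizes.erase(*it);
            it = lru.erase(it);
        }
    }
};
```

With the budget set to 0, everything not needed this frame is dropped on the next prune, which is the closest the cache can get to "purge as fast as we gain" - but the tiles required to render the current view still stay in memory.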
What tileset is this, and what did you use to create it? 3D Tiles is very flexible in how the tile pyramid is arranged, so some tools produce very poor pyramids which require a huge number of tiles to be rendered for typical scenes.
You can save some memory by turning off physics mesh generation if you don’t need it. It’s an option on the Tileset:
And just to make sure, you are using the latest Cesium for Unreal release, v1.6.0, right?
Thanks @Kevin_Ring - to clarify, this is when running on the Oculus Quest 2 with the Melbourne 3D Tiles that Cesium provides, so we can’t run garbage collection every frame because it reduces the application’s performance. We can try maximumScreenSpaceError, but ideally we’d have it render at high quality up close, within a given radius, and at low quality beyond that. How do we implement this logic?
Cesium for Unreal automatically uses detailed tiles up close and less detailed tiles far away. That’s standard behavior. There’s currently no way to make it “even less detailed” far away. It sounds like an appealing feature, but it would probably just be frustrating in practice. For example, “high quality within a radius” would require you to choose that radius. And there’s likely no one radius that would look good in a variety of scenes (e.g. different camera heights, different distances to the “interesting” elements of the scene like mountains or buildings). The only situation where I would imagine that would be useful is if camera motion is extremely restricted.
Still, the Oculus Quest 2 should be powerful enough that it shouldn’t be running out of memory and crashing. Unfortunately I don’t have access to an Oculus Quest 2 to test with. Are you sure OOM is the cause of the crash? Does it always crash when visiting a similar area, or is it just a matter of how long you spend flying around? Is there any possibility there’s application logic that is causing a memory leak? It’s of course possible that there’s a memory leak in Cesium for Unreal itself, but we’d expect to see that on more powerful systems, too (eventually), and to my knowledge we haven’t. You are using v1.6.0, right?
Also can you please confirm that you’re testing outside the Unreal Engine Editor when you see this crash? I’m not sure it would be possible to run on the Oculus inside the Editor, but I do want to make sure because memory growth due to lack of garbage collection inside the Editor is a known problem and not something we are able to fix (it has to be taken up with Epic).
@Kevin_Ring yes, this is outside of the Editor, and yes, this is using v1.6.0. We’re using OVR Metrics to measure, and we can see UMEM climb to around 4300 before it crashes. And it seems to be cumulative: as soon as we spawn into the map, it starts climbing until it crashes. Regarding the radius: that’s a common approach in VR, to have multiple LODs, so it shouldn’t be an issue. Much like in the real world, where things look blurry when they’re far away, the same happens here. Sounds like you’re already doing LODs, we just need to fine-tune them so they’re a bit more sensitive. And no, the Quest 2 is not very powerful. Usually at a total of 20-30 MB of textures and, say, 300K polygons, it will crash… so it’s important to make sure that the player never accumulates anywhere close to this threshold at any given time. We just need to purge as fast as we gain, based on a threshold.
Sounds like you’re already doing LODs, we just need to fine-tune them so they’re a bit more sensitive.
Yep, that’s exactly what maximumScreenSpaceError does. If the Quest can only tolerate 20-30 MB of textures, the Melbourne tileset is, unfortunately, extremely unlikely to work well at the standard quality.
@Kevin_Ring isn’t maximumScreenSpaceError a way to apply lower quality to everything? What if we want to maintain high quality for the immediate surroundings - but custom? Say yours gives a 300-yard radius, and we need a 100-yard radius at the highest resolution for our use case. Where and how can we control this? We could then tune it so that it doesn’t hit the threshold.
That’s not an option, because that’s really not how 3D Tiles works. There’s no notion of “highest quality for a radius.” It selects the appropriate quality by targeting a particular pixel error regardless of distance. Lower quality geometry and textures are needed for a particular pixel error in parts (tiles) that are far away, and that’s what Cesium for Unreal gives you (automatically). The Maximum Screen Space Error property specifies that pixel error value to target.
The computation of the screen-space error (SSE) from the geometricError is the same for the whole screen (i.e. for everything that is visible), and it depends only on the distance of the object (and on the screen size, but we can ignore that for now). So right now, maximumScreenSpaceError is the main way of adjusting the trade-off between performance and quality, in a rather generic and intuitive way: when the quality is too low, decrease the maximumScreenSpaceError; when the performance is too low (or the memory requirements are too high), increase the maximumScreenSpaceError.
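For reference, the conventional 3D Tiles screen-space-error estimate for a perspective camera looks roughly like this (a self-contained sketch; the function name and parameters here are our own, and cesium-native's actual code differs in details):

```cpp
#include <cassert>
#include <cmath>

// Conventional 3D Tiles SSE estimate for a perspective camera:
// the tile's geometricError (in meters) projected to pixels on screen.
// A tile is refined (its children loaded) while sse > maximumScreenSpaceError.
double screenSpaceError(double geometricError, double distanceMeters,
                        double screenHeightPx, double fovyRadians) {
    return (geometricError * screenHeightPx) /
           (distanceMeters * 2.0 * std::tan(fovyRadians / 2.0));
}
```

Because SSE falls off linearly with distance, doubling maximumScreenSpaceError roughly doubles the distance at which any given tile stops refining, so fewer and coarser tiles are loaded everywhere.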
Going beyond that: one could certainly make a case for other strategies (and I moved that discussion to an issue). But it’s certainly not trivial conceptually, due to the many degrees of freedom.
You talked about a “higher resolution for things that are closer to the viewer”, and a “threshold” of, say, 100 yards. But the distance will always have to be taken into account, both within and beyond the specified threshold. For example, when something is 1000 miles away, you have to use a different computation than for something that is 101 yards away. (This could be addressed with the non-linear distance approach that I mentioned in the linked issue.)
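One way to sketch the non-linear distance idea is to raise the effective distance to an exponent greater than 1, so far-away tiles are treated as even farther away (lower SSE, less refinement) while nearby tiles are affected much less. This is purely illustrative - the function and its distanceExponent parameter are invented here and are not part of any Cesium API:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical SSE variant with a non-linear distance term (illustration only).
// distanceExponent = 1.0 reproduces the standard perspective SSE formula;
// values > 1.0 (e.g. 1.2) penalize distant tiles disproportionately.
double biasedScreenSpaceError(double geometricError, double distanceMeters,
                              double screenHeightPx, double fovyRadians,
                              double distanceExponent) {
    double effectiveDistance = std::pow(distanceMeters, distanceExponent);
    return (geometricError * screenHeightPx) /
           (effectiveDistance * 2.0 * std::tan(fovyRadians / 2.0));
}
```

For example, with an exponent of 1.2 a tile 100 m away behaves as if it were about 250 m away, while a tile 2 m away behaves as if it were only about 2.3 m away - so the quality loss lands mostly in the distance.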
Further, it depends on the use case: what you described certainly makes sense for an application where you specifically want to look at a single building (that is close to you), and don’t care much about the rest. But imagine looking at a city skyline from far away: all buildings would then be shown in low resolution, but the terrain tiles right in front of you would be shown in high resolution (even though you don’t care about the terrain - you want to see the skyline, and it should look nice). One could similarly make a case for a strategy that uses a higher resolution for things near the center of the screen - because that’s what you are looking at.
So there are many possible strategies for tweaking the computation of the SSE, depending on different criteria: the distance, user-defined thresholds, view direction, maybe the duration that you’ve been looking at something, and maybe you even want to explicitly ‘pin down’ a certain building/tile to have a higher resolution, regardless of all other criteria. All this might be part of future developments. But implementing it involves not only generalizing the algorithms, but also (importantly) offering it in a form that is easy for users to configure (e.g. for defining these “thresholds”, and for combining the different criteria in a sensible and intuitive way).
@Marco13 @Kevin_Ring Hey guys - thanks for your initial input. We’ve tried playing with the maximumScreenSpaceError using an exponent, as mentioned in the linked GitHub issue. We still wish we could figure out how to ‘release’ tiles from memory faster; it seems like we have to wait about a minute before things get released. If you have an idea, please let us know - maybe there’s a way to trigger garbage collection sooner?

Meanwhile, we have a related question about World Terrain (Asset ID 1): we’re using it as a base over which we place the cities. We noticed in wireframe mode in the UE Editor that it loads tiles far beyond the current horizon. Likewise, in the app there’s high memory usage allocated to World Terrain, even though we’re only using it as a base while actually navigating the 3D city tiles above it. How can we load tiles only within a defined radius? Again, it sounds like we have a particular use case, so we need the ability to define zones where tiles won’t load.