Excessive memory consumption due to textures

When drawing the 3D Tiles asset with IonAssetId=2624490, the textures seem to consume too much memory.
The texture files for the 3D Tiles total 438 MB on disk, but according to the 3DTilesInspector they consume over 5,000 MB. (This was checked by setting maximumCacheOverflowBytes to 10 times the default value.)

If maximumCacheOverflowBytes is set to the default value, the model shape cannot be drawn properly, perhaps because the textures exhaust the available memory.

Sandcastle Sample
(Please note that it will take some time for the information to appear.)

Looking forward to your replies.

One high-level answer here is that the size of an image file does not reflect the amount of memory that the image/texture will occupy.

For example, when you drag-and-drop a 3D model (glTF/GLB) into https://gltf.report/ , then it will show the “size” and the “VRAM”

The “size” is basically the size of the texture when it is stored as a PNG file - which is 16.3 KB here.
The “VRAM” is an estimate of how much memory this texture will occupy on the GPU. And this is 1.4 MB(!) here.
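As a back-of-envelope sketch of why the two numbers differ so much (this is a simplified estimate, not the exact formula that gltf.report uses, and it assumes an uncompressed RGBA8 texture):

```javascript
// Rough estimate of the GPU memory occupied by an uncompressed RGBA8 texture.
// The base level needs width * height * 4 bytes; mipmaps add roughly 1/3 on top.
function estimateVramBytes(width, height, withMipmaps = true) {
  const baseBytes = width * height * 4; // 4 bytes per pixel (RGBA8)
  return withMipmaps ? Math.round(baseBytes * 4 / 3) : baseBytes;
}

// A 512x512 texture may be ~16 KB as a PNG on disk,
// but occupies ~1.4 MB on the GPU once decoded:
console.log(estimateVramBytes(512, 512)); // 1398101 (~1.4 MB)
```

The PNG file size depends on how well the image compresses, but the GPU size depends only on the pixel dimensions and format, which is why the two can differ by orders of magnitude.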

The tileset contains many GLB files (or B3DM files, which in turn contain GLB as well). And even though the textures only occupy 438 MB on disk, they will occupy much more space on the GPU when they are rendered.

There are different ways of how this could be addressed. A very simple one could be to reduce the resolution (i.e. the size in pixels) of the given textures, for example, from 2048x2048 to 1024x1024 (or even less). From the screenshots, the textures do not seem to aim at “high visual quality” (in terms of a “realistic appearance”) anyhow, but are mainly intended as a basic representation of the materials.
A slightly more complex approach is to store the textures not as PNG (or JPG) files, but to compress them to KTX2 (assuming that they are not already stored as KTX2 - we don’t know that yet…). With KTX2, the textures occupy much less GPU memory, even though they can have the same resolution and similar quality. (Compressing to KTX2 is slow, though…).
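To give a sense of the scale of the savings (the assumed rate of 4 bits per pixel is typical for the GPU formats that a KTX2/Basis texture is transcoded to, but the actual rate depends on the target format the browser picks):

```javascript
// Rough GPU-memory comparison for one 2048x2048 texture, ignoring mipmaps.
// RGBA8 = 32 bits per pixel; an assumed compressed target = 4 bits per pixel.
const pixels = 2048 * 2048;
const rgba8Bytes = pixels * 4; // 16,777,216 bytes (~16 MiB) uncompressed
const ktx2Bytes = pixels / 2;  // 2,097,152 bytes (~2 MiB) at 4 bits/pixel
console.log(rgba8Bytes / ktx2Bytes); // 8 (i.e. ~8x less GPU memory)
```

Reducing the resolution multiplies on top of this: halving each dimension (2048 to 1024) cuts the pixel count, and therefore the GPU memory, by another factor of 4.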

(For certain types of models, you can choose “KTX2 compression” when uploading the data to Cesium ion, but this may not be available for all model types)

Thank you for your prompt reply.

I understand that the PNG format is compressed on disk, and that the decoded texture takes up much more space in GPU memory.

As another question: the image in the first post shows a close-up of a specific object (tile). I think the textures that are actually needed at this moment are only a small portion of the total, but the memory usage looks as if most of the textures were loaded.
It seems to me that either the tiles or the memory management is not optimal.
(Note that these 3D tiles were created by myself and not generated by Cesium Ion. The original file is in a format not yet supported by Ion, so I have not been able to verify this.)

Also, what happens if I set maximumCacheOverflowBytes to a value greater than the device’s GPU memory and Cesium’s memory usage exceeds that of the device?

As another question: the image in the first post shows a close-up of a specific object (tile). I think the textures that are actually needed at this moment are only a small portion of the total, but the memory usage looks as if most of the textures were loaded.

There are some details about the memory management and tile loading that I might not be able to explain adequately. But some points that are related to that:

  • CesiumJS will load the tiles, depending on the required level of detail and the camera/view configuration

  • By default, the tiles that have once been loaded will be cached. This makes sense. Imagine that you…

    1. look at the whole tileset (and load many tiles with a low level of detail)
    2. zoom in (and load some tiles with a high level of detail)
    3. zoom out again to look at the whole tileset

    In step 2., it should not throw away the tiles that have been loaded in step 1. Because if these tiles are not cached, then they will have to be loaded again in step 3.

  • There is a limited amount of memory. So it’s not always possible to cache all tiles that have ever been loaded. When necessary, tiles will be removed from this cache

But there may also be the case where the tiles that are required for a single view already occupy more memory than is available. So even without zooming and panning, there may be 100 tiles, each occupying 10MB, so it would need 1GB, and maybe that simply does not fit into memory. If this happens, then CesiumJS will print a warning, and try to use tiles with a lower level of detail (which require less memory).

(The effects of actually running out of memory - due to special/invalid settings for things like the cacheBytes - probably depend on the exact system or browser. In the worst case, this may cause a “crash” of the application…)

If there are more specific questions, then maybe someone from the CesiumJS core team could chime in here.

There is a limited amount of memory. So it’s not always possible to cache all tiles that have ever been loaded. When necessary, tiles will be removed from this cache

I had assumed that when zooming in, as in the image in the first post, tiles that are appropriate for the current camera/view configuration and level of detail are loaded with priority, and tiles that are needed when zooming out are removed from the cache because of their low priority.
However, in reality, the texture alone consumes over 5,000 MB, so it seems unlikely that only the necessary tiles are cached. (Since we are zoomed in, there are not many tiles needed for display, and it is unlikely that the cache limit would be reached with just those alone.)

The effects of actually running out of memory

Thank you for your explanation. I understand.

I’m not sure whether this is still an open question. But to summarize this (with made-up numbers)

  • Initially, it may load 100 of the low-detail tiles. Maybe each of them occupies 40 MB, so it loads 4000 MB of tiles into the cache
  • When you zoom in, it may load 10 high-detail tiles. And maybe each of them occupies 100 MB, so it loads the additional 1000 MB, and then has 5000 MB in the cache
    • If this was “too much”, then it would remove some of the first 100 tiles that have been loaded, to free up space for the high-detail tiles
  • When you zoom out, it will show the 100 low-detail tiles again. But it does not have to reload them, because they are still in the cache
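The scenario above can be sketched as a toy model (this is an illustration of the caching idea, not CesiumJS’s actual implementation; the tile IDs and sizes are made up):

```javascript
// Toy LRU-style tile cache illustrating the zoom-in / zoom-out scenario above.
// NOT the real CesiumJS implementation - just the idea.
class TileCache {
  constructor(cacheBytes) {
    this.cacheBytes = cacheBytes;
    this.tiles = new Map(); // tileId -> sizeBytes; insertion order = LRU order
    this.usedBytes = 0;
  }
  load(tileId, sizeBytes) {
    if (this.tiles.has(tileId)) {   // cache hit: no reload needed
      this.tiles.delete(tileId);    // move to most-recently-used position
      this.tiles.set(tileId, sizeBytes);
      return "cached";
    }
    this.tiles.set(tileId, sizeBytes);
    this.usedBytes += sizeBytes;
    // Evict least-recently-used tiles while over the limit
    for (const [id, size] of this.tiles) {
      if (this.usedBytes <= this.cacheBytes) break;
      this.tiles.delete(id);
      this.usedBytes -= size;
    }
    return "loaded";
  }
}

const cache = new TileCache(5000); // budget in MB, for illustration
for (let i = 0; i < 100; i++) cache.load("low-" + i, 40);  // step 1: 4000 MB
for (let i = 0; i < 10; i++) cache.load("high-" + i, 100); // step 2: +1000 MB
console.log(cache.load("low-0", 40)); // step 3: "cached" - no reload needed
```

In step 3 the low-detail tile is still in the cache, so zooming out does not trigger a reload; only when the budget is exceeded are the least-recently-used tiles evicted.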

Sorry, my earlier question was incorrect. Please let me ask the question again.

If I set a large enough value for maximumCacheOverflowBytes, the correct shape and texture are drawn, as shown in the figure below.

However, when maximumCacheOverflowBytes is set to the default value (512MB), the tiles are drawn with a very coarse shape as shown in the figure below, and it seems that the tiles are not loaded and drawn appropriately for the current camera/view configuration and level of detail. (I think low detail tiles are being drawn.)

What is being rendered above is only a small portion of the total data (circled in red in the figure below), and this alone does not seem to exceed the maximumCacheOverflowBytes (512MB). I expected that the low-resolution tiles would be discarded from memory and the necessary high-resolution tiles would be drawn, but this is not the case.

Analyzing and explaining certain aspects of the behavior can be difficult. It requires a deep knowledge of the implementation details, a deep knowledge about the structure of the data set, and potentially a considerable amount of time.

Two things that you could try out:

You mentioned the maximumCacheOverflowBytes. This is just the “wiggle room” that is added on top of the cacheBytes. You might consider not increasing the maximumCacheOverflowBytes, but instead setting the cacheBytes to a larger value than the default. (But I’d have to look up some implementation details, and am not 100% sure whether this will affect the behavior for this specific data set.)

You are already adding the “Inspector” in the Sandcastle that is shown in the screenshot. The inspector has a “Tile Debug Labels” section. You could select “Show Picked Only” and “Memory Usage (MB)”, and then hover over some of the visible tiles, to get an idea about how much memory these tiles occupy. But here, some details also depend on the exact structure of the data set. For example: I think that it will always be necessary to at least store the path of tiles, from the root tile to the (high-detail) tiles that you are currently zoomed in to. And… when the root tile already occupies “a lot of” memory, then there’s only very little memory left for the tiles that are currently important.
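For reference, the inspector mentioned here can be added to a Sandcastle with the standard mixin (this assumes the usual Sandcastle `viewer` variable):

```javascript
// Sketch: adding the 3D Tiles Inspector to the Sandcastle viewer.
// The "Tile Debug Labels" section then offers "Show Picked Only"
// and "Memory Usage (MB)".
viewer.extend(Cesium.viewerCesium3DTilesInspectorMixin);
```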


Thanks for the advice.

I selected “Show Picked Only” and “Memory Usage (MB)” and set a large enough maximumCacheOverflowBytes to load a high-definition tile and checked the memory consumption of this. As shown in the figure below, the total for Texture and Geometry was 100 MB.

The maximumCacheOverflowBytes setting required to draw the high-definition tile in the above figure was 8 times the default value (4,096 MB). (At seven times the default (3,584 MB), the tiles were rendered one level lower in detail.)
This means that 3,484 MB (= 3,584 MB - 100 MB) of memory was consumed elsewhere and not freed up for the high-definition tiles.

I think that it will always be necessary to at least store the path of tiles, from the root tile to the (high-detail) tiles that you are currently zoomed in to.

I don’t know what exactly “the path of tiles” indicates, but is it something that consumes 3,484 MB of memory?

And… when the root tile already occupies “a lot of” memory, then there’s only very little memory left for the tiles that are currently important.

Does that mean that the root tile will always hold information other than “the path of tiles” and will not be freed up for the high definition tiles?

cacheBytes

I set this to 8 times the default (4,096 MB) and the high-resolution tiles were rendered. Together with the default maximumCacheOverflowBytes, the total of 4,608 MB seemed to be enough to render even the high-resolution tiles.
(I am not sure of the difference between “cacheBytes” and “maximumCacheOverflowBytes”…)

I selected “Show Picked Only” and “Memory Usage (MB)” and set a large enough maximumCacheOverflowBytes to load a high-definition tile and checked the memory consumption of this. As shown in the figure below, the total for Texture and Geometry was 100 MB.

Note that “Show Picked Only” means that it only displays the memory usage of the tile under the mouse cursor. When you disable “Show Picked Only”, then it will display the information for all tiles (but depending on how many there are, this may be hard to read…)

The maximumCacheOverflowBytes setting required to draw the high-definition tile in the above figure was 8 times the default value (4,096 MB). (At seven times the default (3,584 MB), the tiles were rendered one level lower in detail.)

The default value for the cacheBytes and maximumCacheOverflowBytes is 512MB. This is very conservative. On a modern desktop PC (with 16, 32, or even 64 GB of RAM), you can set these values much higher (some additional details below).
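As a sketch of how these limits could be raised in a Sandcastle (the asset ID is the one from the first post; the values are only examples for a desktop PC, not recommendations, and the usual Sandcastle `viewer` variable is assumed):

```javascript
// Sketch: raising the cache limits for a tileset loaded from Cesium ion.
// Example values for a desktop machine with plenty of RAM.
const tileset = await Cesium.Cesium3DTileset.fromIonAssetId(2624490, {
  cacheBytes: 4096 * 1024 * 1024,                // 4 GiB instead of the 512 MiB default
  maximumCacheOverflowBytes: 1024 * 1024 * 1024, // 1 GiB of "wiggle room" on top
});
viewer.scene.primitives.add(tileset);
```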

I don’t know what exactly “the path of tiles” indicates, but is it something that consumes 3,484 MB of memory?
…
Does that mean that the root tile will always hold information other than “the path of tiles” and will not be freed up for the high definition tiles?

The “path of tiles” refers to the sequence of tiles, starting at the root tile, going through all the child tiles, down to the tiles that are currently visible for the camera. A quick, crude drawing:

Even though the (low-detail) geometry of the tiles on this path is currently not rendered, the tiles on the path (marked in red) have to be kept in memory. And when, for example, the root tile occupies a lot of memory, then this may affect the loading of the other tiles.

I set this to 8 times the default (4,096MB) and the high-resolution tiles were rendered. 4,608MB together with maximumCacheOverflowBytes seemed to render even the high-resolution tiles.
(I am not sure of the difference between “cacheBytes” and “maximumCacheOverflowBytes”…)

I did ask about that internally. An attempt to summarize it:

  • The cacheBytes is the size of the cache that is allowed. Increasing this will allow more tiles to be kept in the cache, and can decrease load times when navigating through a tileset, at the cost of a higher memory consumption
  • The maximumCacheOverflowBytes is the amount by which the cacheBytes may be exceeded before CesiumJS forces the use of lower-detail tiles.
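A minimal sketch of how these two values interact, as described above (an illustration of the behavior, not the actual CesiumJS code; the messages are made up):

```javascript
// Illustration of how cacheBytes and maximumCacheOverflowBytes interact.
// NOT the real CesiumJS logic - just the described behavior.
function cacheDecision(usedBytes, cacheBytes, maximumCacheOverflowBytes) {
  if (usedBytes <= cacheBytes) {
    return "within cache budget";
  }
  if (usedBytes <= cacheBytes + maximumCacheOverflowBytes) {
    return "over budget: trim tiles that are no longer needed";
  }
  return "over hard limit: force lower-detail tiles";
}

const MB = 1024 * 1024;
// With the defaults (512 MB each), the hard limit is 1024 MB:
console.log(cacheDecision(400 * MB, 512 * MB, 512 * MB));  // "within cache budget"
console.log(cacheDecision(1200 * MB, 512 * MB, 512 * MB)); // "over hard limit: force lower-detail tiles"
```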

So I think that you could resolve the issue even by only setting the maximumCacheOverflowBytes to a “very large” value. But again: The “best” settings here depend on many factors (the exact structure of the tileset, the way of interaction, and of course, the machine that all this is running on).

And, as mentioned in the first response here: I think that reducing the size (in pixels) of the textures (or maybe compressing them with KTX2) could be a good way to significantly reduce the memory consumption. Looking at the last screenshot that you posted: this grainy, noisy, brown texture takes 80 megabytes of memory. I think that this could be reduced to … maybe 1 (one) megabyte without significant loss of quality…

On a modern desktop PC (with 16, 32, or even 64 GB of RAM), you can set these values much higher (some additional details below).

While the default values may be conservative for a desktop PC, this is not the case for tablets and smartphones: even the latest iPad models only have around 6-8 GB of RAM, so there is not always room for 4-5 GB just for rendering. We want to keep memory usage in check if possible.

Even though the (low-detail) geometry of the tiles on this path is currently not rendered, the tiles on the path (marked in red) have to be kept in memory.

Since all of this tile data has refine = “REPLACE” set, I don’t think we necessarily need to keep the geometry and textures of the tiles marked in red (although I understand that basic information must be retained to control tile rendering).

Explanation of maximumCacheOverflowBytes and cacheBytes

Thank you. I understand approximately what you mean.

About KTX compression

We will consider supporting KTX compression.