multiple_images100.zip (12.6 KB)
We are having problems loading a lot of images to the globe (see example).
Looks like loading more than 200 images crashes the client.
The images are small (<300kB).
Any workarounds for this?
There isn’t any limit in Cesium for the number of SingleTileImageryProviders that may be used.
What are the target environments (OS, RAM, GPU, GPU memory) for your application? Is the complexity of the example (100 rectangles x 6 images) representative of the final application?
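For context, a minimal sketch of how such a setup typically looks. The URLs and rectangle extents below are placeholders, not taken from the original report, and `buildLayerDescriptors` is a hypothetical helper; depending on your Cesium version, `SingleTileImageryProvider.fromUrl` may be required instead of the constructor:

```javascript
// Sketch: adding many single-tile imagery layers to a viewer.
// URLs and extents are placeholders; adapt to your own data layout.
function buildLayerDescriptors(count, baseUrl) {
  const descriptors = [];
  for (let i = 0; i < count; i++) {
    descriptors.push({
      url: `${baseUrl}/slice_${i}.png`,
      // [west, south, east, north] in degrees -- placeholder extents
      extent: [10 + i * 0.01, 48, 10.01 + i * 0.01, 48.01],
    });
  }
  return descriptors;
}

// Only runs in a page where Cesium is loaded; skipped under plain Node.
if (typeof Cesium !== "undefined") {
  const viewer = new Cesium.Viewer("cesiumContainer");
  for (const d of buildLayerDescriptors(100, "http://localhost:12000/data")) {
    viewer.imageryLayers.addImageryProvider(
      new Cesium.SingleTileImageryProvider({
        url: d.url,
        rectangle: Cesium.Rectangle.fromDegrees(...d.extent),
      })
    );
  }
}
```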
We have Win11 (OS), 32 GB RAM, an 8-core/16-thread CPU, and an NVIDIA RTX A2000 GPU.
The final complexity is not 100x6 images (it should be more), but we extracted some sample data, and as the number of images increases, the browser gets stuck. Roughly anything around 50x6 images hangs the browser.
Our current experience is that 100x6 images take 15.8 GB of RAM and the browser randomly crashes.
The images take 217 MB on the filesystem, yet the browser allocates 15.8 GB. Why does the browser allocate so much memory for that?
This feature is more or less unusable for 50+ images…
How do we properly load 100+ images into Cesium?
Hi there, would you be able to provide a code example for duplicating the crash?
Hi, I have provided only part of the data, because I don’t want to upload 178 MB of images. Here is a sample with images.
multiple_images_data.zip (5.6 MB)
To run the whole example on sandcastle.cesium.com, you should start a local web server (Apache, Nginx, or something else) on port 12000 and put all the images in the web root folder under “/data/david/test”. If you want to put your images somewhere else, simply adapt the HTML file. The currently expected URL is:
http://localhost:12000/data/david/test/slice_0_z20.png (only example)
To test the HTML file, you can rename the existing PNG files to the missing file names and test the whole set of 100 files.
Thanks for the response. If you find any hint as to why Cesium needs so much memory and the browser crashes, let me know.
Thanks for the additional info @Rastislav_Samak!
Have you considered using any other variety of imagery provider to load the imagery data?
Hi, the other possibility is to use Cesium.UrlTemplateImageryProvider, but the problem is that for this purpose we would need to cut the data into tilesets and generate the proper images for each zoom level. This causes an exponential increase in the number of files on the filesystem, plus additional network traffic to download images whenever somebody zooms in on the globe.
For our purpose, SingleTileImageryProvider would be ideal, but the problem is that the browser crashes even with a small number of files…
Is it really the case that if I want to load 100+ images into Cesium, I should only use a tile-oriented provider?
So loading 100+ single-tile imagery providers all at once puts everything into memory at the highest level of detail, even when it isn’t being rendered. This causes the performance hit and, depending on your hardware, potentially a crash.
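Some back-of-the-envelope arithmetic illustrates the gap between compressed size on disk and decoded size in memory: PNGs are compressed, but each one is decoded into an uncompressed RGBA texture, typically with mipmaps on top. The image dimensions below are assumptions for illustration, not taken from the actual data:

```javascript
// Decoded texture memory: width * height * 4 bytes (RGBA),
// plus roughly one third extra for the mipmap chain.
function decodedBytes(width, height, count, withMipmaps = true) {
  const base = width * height * 4 * count;
  return withMipmaps ? Math.round((base * 4) / 3) : base;
}

// Example: 600 images (100 rectangles x 6 layers), assumed 2048x2048 each.
const gib = decodedBytes(2048, 2048, 600) / 1024 ** 3;
console.log(gib.toFixed(1) + " GiB"); // prints "12.5 GiB"
```

So even at these assumed dimensions, decoded textures alone land in the low double-digit gigabytes, which is the same order of magnitude as the 15.8 GB observed, regardless of the 217 MB footprint on disk.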
Dividing the images up into tilesets ahead of time allows the client to fetch only the required resources, lessening the load on the client. This comes at the expense of some processing time up front, plus the space on disk as you mentioned. However, this is a very common paradigm in GIS apps, both 2D and 3D, to provide the best experience for the end user. As such, there are lots of tools available for generating tilesets, many of which are open source.
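As a sketch, consuming a pre-tiled layout usually looks something like this. The URL template layout, layer name, and zoom range are placeholders, and `buildTemplate` is a hypothetical helper:

```javascript
// Sketch: one UrlTemplateImageryProvider per pre-tiled dataset.
// The {z}/{x}/{y}.png path layout is an assumption about your tileset.
function buildTemplate(baseUrl, layerName) {
  return `${baseUrl}/${layerName}/{z}/{x}/{y}.png`;
}

// Only runs in a page where Cesium is loaded; skipped under plain Node.
if (typeof Cesium !== "undefined") {
  const viewer = new Cesium.Viewer("cesiumContainer");
  viewer.imageryLayers.addImageryProvider(
    new Cesium.UrlTemplateImageryProvider({
      url: buildTemplate("http://localhost:12000/data", "david_test"),
      maximumLevel: 20, // placeholder; match your tileset's deepest zoom level
    })
  );
}
```

The key difference from the single-tile approach is that Cesium only requests the tiles covering the current view at the current level of detail, so memory use stays roughly constant no matter how large the source data is.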
Is there a reason you are trying to reduce disk space usage and network traffic?
Thanks for the response…
Simply costs. In the cloud you pay for data transfer and data volume… Network traffic also slows down scene rendering, because new objects need to be downloaded and rendered.
One more comment: we currently have 2.33 TB of data, and each new data source adds more… These 2.33 TB are not only transparent images…