I am noticing slow responses from a Cesium application and want to know what we can do to improve it.
I am using a VM with an Intel Xeon 4-core CPU and 16 GB of RAM. Do I need to add more cores, or maybe a GPU? In what scenarios should a GPU be used for Cesium?
CesiumJS relies on WebGL, which is hardware accelerated using the GPU. You should have no issues running CesiumJS on an integrated, discrete or mobile GPU. Could you share a Sandcastle demonstrating a sample scene where you are seeing performance issues? Additionally, it would be helpful if you could share the output from WebGL Report.
I cannot share a sandcastle because the data is proprietary.
Here is the WebGL report:
That’s ok @rpereira1.
From the WebGL report, it seems like SwiftShader, a CPU-based implementation of Vulkan, is being used. I think adding a GPU would help performance quite a lot.
What kind of VM is this? Some environments support either pass-through or (better) emulated integrated GPUs. Falling all the way back to SwiftShader means that the browser couldn’t use any kind of accelerated video driver, and it’s a big red flag.
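For anyone else diagnosing this: the renderer string that WebGL Report shows can also be checked with a small heuristic. This is a sketch, not an official Cesium API; the function name and the keyword list are my own. In a browser you would obtain the string via `gl.getExtension('WEBGL_debug_renderer_info')` and `gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)`.

```javascript
// Heuristic check for software (CPU-based) WebGL renderers, based on the
// renderer string that WebGL Report displays. In a browser, get it with:
//   const gl = document.createElement('canvas').getContext('webgl');
//   const ext = gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
function isSoftwareRenderer(renderer) {
  // SwiftShader, llvmpipe, and generic "software" renderers all indicate
  // that the browser fell back to CPU rasterization instead of the GPU.
  return /swiftshader|llvmpipe|software/i.test(renderer);
}

console.log(isSoftwareRenderer('Google SwiftShader'));
console.log(isSoftwareRenderer('ANGLE (Intel(R) UHD Graphics 620 Direct3D11)'));
```

If this returns `true`, CesiumJS will run, but every frame is being rasterized on the CPU, which matches the symptoms described above.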
I’m running into this with users who are in a secure VM environment, where it’s a known issue. They suggested I file a bug with Google; feel free to follow along there.
Of course, if Cesium can do anything to optimize its performance under SwiftShader, that would be great too. But for @rpereira1 , if you can possibly get off of SwiftShader, you really should.
Thanks for the response! I will check the type of VM and see if we can get a GPU.
Adding an integrated GPU might take a while. Are there any other options we can use to make the VM more performant? It currently has 16 GB of RAM and 4 cores. I can see the 4 cores are at 100% usage, so would adding more cores help? Any other options?
I can only guess that since you’re using SwiftShader, which is using the CPU to essentially emulate a GPU, perhaps adding more cores could improve performance, but that’s a SwiftShader concern, not a CesiumJS one. Typically, a single instance of CesiumJS running on a system would not demand CPU usage that high (unless there is lots of custom geometry etc. being updated regularly).
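In the meantime, while you’re stuck on SwiftShader, there are a few CesiumJS settings that reduce how much work each frame does. This is a sketch of options to try, not a guaranteed fix; the option names (`requestRenderMode`, `maximumRenderTimeChange`, `resolutionScale`, `maximumScreenSpaceError`) are real CesiumJS APIs, but the specific values are just starting points you’d need to tune for your scene:

```
// Only render frames when the scene actually changes, instead of
// continuously at the display's refresh rate.
const viewer = new Cesium.Viewer('cesiumContainer', {
  requestRenderMode: true,
  maximumRenderTimeChange: Infinity,
});

// Render at half resolution; a big win when rasterization is on the CPU.
viewer.resolutionScale = 0.5;

// Accept coarser terrain/imagery detail (default is 2; higher = less work).
viewer.scene.globe.maximumScreenSpaceError = 4;

// Skip post-processing the CPU rasterizer has to pay for.
viewer.scene.postProcessStages.fxaa.enabled = false;
```

With `requestRenderMode` enabled you may need to call `viewer.scene.requestRender()` after your own data updates; for 3D Tiles, raising the tileset’s own `maximumScreenSpaceError` helps similarly.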
Just to make sure this is clear, “integrated” GPU means one built into a hardware CPU. A standalone video card is a “dedicated” GPU. The idea of “adding” an integrated GPU to a VM really means passing through hardware-accelerated video rendering instructions to the (real) GPU portion of your host CPU. Typically, this is either a pretty simple configuration option in your VM environment, or it’s totally impossible. In the case I linked above, it is totally impossible to use hardware acceleration via integrated (on-CPU) video drivers, for security reasons.
So, maybe you could provide a bit more detail? It could be possible, or impossible, but I can’t think how it “might take a while”.
It is a process for the customer to add GPUs. They are planning to loan me another VM with a dedicated GPU.