I have created a script for importing shapefiles into Unreal Engine and then generating Cartographic Polygons from the point data. The polygons are then used to clip 3 different tilesets (Google terrain outside a defined area, and terrain + buildings inside that area). All in all I use about 120 polygons across the 3 tilesets, with a combined vertex count in the thousands, perhaps tens of thousands.
With this setup, my FPS drops from 60+ in the editor to 20-30 when the clipping is active. This also seems to be the case in the packaged game.
Any suggestions on how to optimize this? Is the main issue the number of polygons, or their overall resolution? Could I, for example, improve performance by merging polygons?
Regards, Gustav.
Hi @Simstad, welcome to the community!
First, I would ask whether you have Exclude Selected Tiles enabled on your Cesium Polygon Raster Overlay. When this setting is enabled, any tiles that fall completely outside of the polygon will not be loaded, so the app won’t waste time or memory loading their geometry and textures. However, this does require a per-frame check to test whether each tile is excluded by the polygons associated with the tileset.
I suspect the answer is yes, given the performance drops that you’re experiencing.
In that case, I think it’s definitely a combination of both resolution and the number of polygons. Under the hood, the cost of the exclusion math grows with both the number of polygons and the number of vertices in each polygon. There is an initial bounding-volume check to prevent that math from running when it’s unnecessary, but performance will still be affected regardless.
If you’re interested, you can check out the math for that: the rectangleIsOutsidePolygons function is what RasterizedPolygonsTileExcluder uses to perform the per-tile check.
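To make the scaling concrete, here is a simplified Python sketch of that kind of test. It is purely illustrative (not the actual cesium-native implementation) and it skips the pure edge-crossing overlap case for brevity:

```python
# Purely illustrative sketch of the tile-exclusion idea, not the cesium-native code.
# The cost grows with (number of polygons) x (vertices per polygon), which is why
# many high-resolution polygons make the per-frame, per-tile check expensive.

def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def rectangle_is_outside_polygons(rect, polygons):
    """rect = (min_x, min_y, max_x, max_y); True if no polygon touches the rectangle.
    Simplified: ignores the case where only edges cross."""
    min_x, min_y, max_x, max_y = rect
    for polygon in polygons:
        xs = [px for px, _ in polygon]
        ys = [py for _, py in polygon]
        # Cheap bounding-box rejection before any per-vertex work.
        if max(xs) < min_x or min(xs) > max_x or max(ys) < min_y or min(ys) > max_y:
            continue
        # A polygon vertex inside the rectangle means they overlap.
        if any(min_x <= px <= max_x and min_y <= py <= max_y for px, py in polygon):
            return False
        # A rectangle corner inside the polygon also means they overlap.
        corners = [(min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)]
        if any(point_in_polygon(cx, cy, polygon) for cx, cy in corners):
            return False
    return True
```

The important part is the nested iteration: every tile query touches every polygon that survives the bounding-box rejection, vertex by vertex, which is why reducing both the polygon count and the vertex count helps.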
I’m interested to see how performance changes when Exclude Selected Tiles is turned off. This does result in more geometry loading, but perhaps the performance is better than querying every frame, per polygon, which tiles should load. The areas clipped by the polygons will still be hidden either way, thanks to a technique in the default Cesium materials. But that happens via a pixel shader effect, so the geometry will still exist under the hood.
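If clicking through each overlay is tedious, a rough editor-Python sketch like the one below could flip the setting everywhere at once. I’m assuming UE5’s EditorActorSubsystem and guessing the snake_case property name from the Exclude Selected Tiles label, so please verify both against your engine and plugin versions:

```python
import unreal

# Disable "Exclude Selected Tiles" on every Cesium Polygon Raster Overlay in the level.
# The property name below is assumed from the Details-panel label; verify it first.
actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
for actor in actor_subsystem.get_all_level_actors():
    for overlay in actor.get_components_by_class(unreal.CesiumPolygonRasterOverlay):
        overlay.set_editor_property("exclude_selected_tiles", False)
        unreal.log(f"Disabled Exclude Selected Tiles on {actor.get_name()}")
```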
I hope this explanation helps – let us know if it’s feasible to turn off that setting.
Hi again! Just wanted to update you on my issue.
I could not see a difference in performance by changing Exclude Selected Tiles.
However, it seems that the problem was the resolution of my polygons.
I processed the data in QGIS, merging some of the polygons and reducing the data to about 10% of the original amount.
This increased performance by about 30 FPS, so the polygon resolution seems to have been the culprit!
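For anyone else who runs into this, the processing was essentially a dissolve followed by a geometry simplification. Something along these lines in the QGIS Python console should be close (the paths, the empty dissolve field, and the tolerance are just examples; the right tolerance depends entirely on your data and its units):

```python
# Rough PyQGIS sketch (run from the QGIS Python console): merge adjacent polygons
# and then simplify the result. Paths and tolerance values are examples only.
import processing

merged = processing.run("native:dissolve", {
    "INPUT": "clipping_polygons.shp",   # example input path
    "FIELD": [],                        # dissolve everything into as few features as possible
    "OUTPUT": "memory:merged",
})["OUTPUT"]

simplified = processing.run("native:simplifygeometries", {
    "INPUT": merged,
    "METHOD": 0,                        # 0 = distance-based (Douglas-Peucker)
    "TOLERANCE": 1.0,                   # example value; depends on the layer's units
    "OUTPUT": "simplified_polygons.shp",
})["OUTPUT"]
```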
Below is a new issue that I’m having. I thought it might be related to the above; otherwise I can start a new topic.
I’m now seeing lag hitches in the packaged game, with massive lag spikes caused by Cesium:AsyncLoading visible in Unreal Insights. What is strange is that I don’t experience this in PIE. I would have expected the packaged game to perform better than PIE, not worse. It seems that the main thread is waiting for the Cesium tiles to load, which lags the camera controls tick. I have found 2 things that improve the performance, or at least reduce the lag spike time. One is to reduce the simultaneous tile loads to about 2 (which means 6 across the 3 Cesium assets). This lowers the lag spike time, but doesn’t remove it completely.
The other is to start the .exe for my packaged game with -corelimit=1. That seems to smooth things out, but of course lowers the overall performance.
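For reference, the setting I changed is Maximum Simultaneous Tile Loads on each Cesium3DTileset. If anyone wants to script it, something like this editor-Python sketch should work (the snake_case property name is my assumption from the panel label, so double-check it):

```python
import unreal

# Lower "Maximum Simultaneous Tile Loads" on every Cesium3DTileset in the level.
# Property name assumed from the Details-panel label; verify against your plugin version.
actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
for actor in actor_subsystem.get_all_level_actors():
    if isinstance(actor, unreal.Cesium3DTileset):
        actor.set_editor_property("maximum_simultaneous_tile_loads", 2)
        unreal.log(f"Set tile load limit to 2 on {actor.get_name()}")
```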
Any idea what might be causing this? What settings can I try? I have already experimented with a lot of the tile settings, but perhaps I’m missing something?
I could also look at more specific things in Unreal Insights, but I might need some guidance, as I can’t find more detail about the tile loading process.
Regards, Gustav
It seems that the main thread is waiting for the Cesium tiles to load, which lags the camera controls tick.
If the main thread is waiting for tiles to load, this is usually caused by the presence of a running Level Sequence in your level. Because Level Sequences are often used to record videos, Cesium for Unreal drops into a mode where it makes sure all data is loaded before continuing each frame. This is, of course, not always a good assumption!
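Separately from the workaround linked below, if it’s hard to tell whether any sequences are still lurking in the level, a quick editor-Python pass can list them (this assumes UE5’s EditorActorSubsystem):

```python
import unreal

# List every Level Sequence actor placed in the current level, to check whether one
# of them could be playing and triggering the "wait for all tiles" behavior.
actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
sequence_actors = [
    actor for actor in actor_subsystem.get_all_level_actors()
    if isinstance(actor, unreal.LevelSequenceActor)
]
for actor in sequence_actors:
    unreal.log(f"Level Sequence actor: {actor.get_path_name()}")
unreal.log(f"Total Level Sequence actors found: {len(sequence_actors)}")
```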
The issue is described here, and there is a workaround listed:
Hello!
At first I thought that this was the answer, since we were doing a movie production in parallel and thus had a lot of sequencers in the level and sublevels. However, neither trying the suggested Blueprint code nor removing all remaining Level Sequences had any impact on the lag spikes.
I did eventually realize that the lag had started after a specific commit from my colleague who was working on the movie. I was not able to pinpoint what the commit introduced, but by reverting the project to before it and then cherry-picking some needed assets, I was able to decrease the lag significantly.
So now it is running better, but there is still noticeable lag caused by Cesium:AsyncTask, which the game and render threads seem to be waiting for.
I’ll add a screenshot from Insights; maybe you can make something of it.
I’m not sure what to make of that, @Simstad. loadModelAnyThreadPart happens in a background thread (via Unreal’s task graph), and the game and rendering threads should never block waiting for it. Are you sure that is what you’re seeing? FWIW, 200+ ms is unusually long, but not completely ridiculous, especially if you have a very complicated model or detailed textures.