Hiya,
It’s hard to give concrete suggestions, as there’s an almost endless stream of things that can be looked at. This sounds like a fairly comprehensive application, so any part of it could be tweaked.
Let’s start with data. You mention CityGML, for example, which Cesium doesn’t support natively unless you’ve converted it into something like 3D Tiles. Tilesets can always be tweaked for performance with things like screen-space error (SSE) and geometric error at different zoom levels, even though the defaults are fairly good (I think the geometric error halves per tile level, from memory?). A lot of this optimisation comes from how you generate your 3D Tilesets, and with what tools and software. Orthos and point clouds would be similar, although with orthos / raster tiles you can play with the per-zoom-level error a bit more easily. Point clouds are a whole different kettle of fish, but it doesn’t sound like you’ve got those. You didn’t mention whether you have any entities (vector graphics, labels, points, lines, etc.), so I’m assuming you don’t; if you do, there’s the Entity API for most normal things (where Cesium does its best automatic work), as well as the Primitive API for when performance is king (but you have to do more of the work).
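As an aside, that halving works out to a simple power-of-two relationship. A rough sketch (the root error of 512 is just an assumed example value, not from your tileset):

```javascript
// Illustrative only: in a quadtree-style 3D Tileset, geometric error
// typically halves at each level of subdivision.
function geometricErrorAtLevel(rootError, level) {
  return rootError / Math.pow(2, level);
}

// A tile refines when its geometric error, projected onto the screen,
// exceeds the tileset's maximumScreenSpaceError (Cesium's default is 16 px).
console.log(geometricErrorAtLevel(512, 0)); // 512
console.log(geometricErrorAtLevel(512, 4)); // 32
```

Raising `maximumScreenSpaceError` on the tileset trades visual fidelity for fewer tile loads and refinements, which is often the quickest win to try.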
Then the browser environment. Are you using frameworks for the JS front-end, or have you rolled your own? Vanilla, jQuery, Backbone, templates, change detection, event management, what? How much real-time JS processing is going on (both what you know about and what you don’t)? This started with hover functionality, so how much difference does the hover functionality make? What numbers are we talking about when you say performance? What’s the expected vs. actual FPS as you move around the scene? Can you measure what each real-time process costs in terms of time, and reduce all functionality to measurable parts?
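Since hover is where this started: a mousemove handler that picks the scene on every event is a classic cost. A minimal sketch of throttling it, so the expensive work runs at most once per interval (the 100 ms interval and the `viewer` name are assumptions; tune to your scene):

```javascript
// Throttle a hot event handler so expensive work (e.g. scene picking)
// runs at most once per intervalMs; intermediate events are dropped.
function throttle(fn, intervalMs) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Usage with Cesium's event handler (viewer is a placeholder):
// const handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);
// handler.setInputAction(
//   throttle((movement) => {
//     const picked = viewer.scene.pick(movement.endPosition);
//     // ...highlight logic...
//   }, 100),
//   Cesium.ScreenSpaceEventType.MOUSE_MOVE
// );
```

Even just logging how often your hover callback fires, before and after throttling, tells you a lot.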
I use Angular2, for example, which comes with its own kettle of performance issues, mainly around change detection and the shadow DOM. React and Vue are stronger on change detection, but weaker on chaining and events. How much real-time processing is going on, and what can be done about it? Can real-time be reduced to near real-time? And so on.
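One framework-agnostic pattern for turning real-time into near real-time is to coalesce rapid-fire updates into at most one per animation frame. A minimal sketch (the `schedule` parameter is my addition so the logic can run and be tested outside a browser; in the browser it defaults to `requestAnimationFrame`):

```javascript
// Coalesce frequent updates so the expensive apply step runs at most
// once per frame, receiving only the latest pending value.
function frameCoalescer(applyUpdate, schedule) {
  const defer =
    schedule ||
    (typeof requestAnimationFrame === 'function'
      ? requestAnimationFrame
      : (cb) => setTimeout(cb, 16)); // ~60 fps fallback outside browsers
  let pending;
  let scheduled = false;
  return function (value) {
    pending = value;
    if (!scheduled) {
      scheduled = true;
      defer(() => {
        scheduled = false;
        applyUpdate(pending);
      });
    }
  };
}

// Usage: feed it every event, but the DOM/scene update runs once per frame.
// const updateLabel = frameCoalescer((pos) => { /* expensive work */ });
```

In Angular2 specifically, running this kind of high-frequency work outside the zone (so it doesn’t trigger change detection on every event) compounds the win.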
Finally, load times and performance also come down to servers, the number of calls, animation interaction, callback properties vs. static values, and on and on. In your Lighthouse timings, no one should be surprised at how much time is spent in Cesium, and I have to say, an execution time of 15 ms is not bad at all. So here’s a common thing to check: does Chrome (or your browser) use the GPU, or the CPU, for its 3D rendering? It’s not the GPU by default on a lot of machines.
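You can check this programmatically by asking WebGL for its renderer string; in Chrome, a software fallback usually reports something like “SwiftShader”. A sketch (the guard just lets it degrade gracefully outside a browser):

```javascript
// Report which renderer the browser's WebGL context is using, so you can
// spot software (CPU) rasterizers like SwiftShader vs. a real GPU.
function getWebGLRenderer() {
  if (typeof document === 'undefined') return 'unknown (no DOM)';
  const canvas = document.createElement('canvas');
  const gl =
    canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return 'no WebGL context';
  const info = gl.getExtension('WEBGL_debug_renderer_info');
  return info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL) : 'renderer masked';
}

console.log(getWebGLRenderer());
```

In Chrome you can also just open chrome://gpu and look at the “Graphics Feature Status” section.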
Apart from that, a lot more detail is needed for proper performance tweaking. Try to measure in your JS code how long certain operations take (const start = performance.now(); …time passes… const elapsed = performance.now() - start;) and start mapping out where the issue is coming from. I believe Lighthouse (or the browser’s performance profiler) can be pointed at a dynamic app and will give you better performance maps.
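To make that pattern reusable, you could wrap suspect handlers in a small timing helper; the names here are made up for illustration:

```javascript
// Wrap a function so every call logs its duration via performance.now().
// (performance is a global in browsers and in recent Node versions.)
function timed(label, fn) {
  return function (...args) {
    const start = performance.now();
    const result = fn.apply(this, args);
    console.log(`${label}: ${(performance.now() - start).toFixed(2)} ms`);
    return result;
  };
}

// Usage with a hypothetical hover handler:
const onHover = timed('hoverUpdate', (n) => n * n);
console.log(onHover(4)); // 16, after a timing line such as "hoverUpdate: 0.01 ms"
```

A handful of these around your hover, camera, and data-update code will quickly show where the frame time is actually going.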
Cheers,
Alex