Our configuration is:
Chrome version 49
CesiumJS version 1.19
Our web app plots many different layers of data on a Cesium map. We are using the Viewer (var viewer = new Cesium.Viewer(...)). Each layer has its own variable that stores its data source collection, so for layer X we have:
var XDataSources = new Cesium.DataSourceCollection();
If a user wants to see data about ABC, we retrieve the data from our backend through a REST API. If the data belongs to layer X, we create a new DataSource (new Cesium.CustomDataSource()), say Y, and for each data point retrieved we add an entity to Y's entity collection.
Finally, once all the entities are populated in Y, we add the data source to our Viewer's data source collection with viewer.dataSources.add(Y).
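A minimal sketch of that build step, assuming Cesium is loaded globally; the records variable and the record fields (id, lon, lat) stand in for whatever our REST call actually returns, and the point styling is illustrative:

```javascript
// Build a CustomDataSource for one query result set.
// `records` is a placeholder for the parsed REST response.
function buildDataSource(name, records) {
  var y = new Cesium.CustomDataSource(name);
  records.forEach(function (r) {
    // One entity per data point, positioned from assumed lon/lat fields.
    y.entities.add({
      id: r.id,
      position: Cesium.Cartesian3.fromDegrees(r.lon, r.lat),
      point: { pixelSize: 6 }
    });
  });
  return y;
}

// Once populated, attach it to the viewer so it renders:
// viewer.dataSources.add(y);
```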
The reason we store all these different data sources is so that when the user decides to remove ABC from the map, we keep it in memory; if they decide to add ABC again, we don't call the API, we plot it from memory. The data sources are never destroyed, to avoid repeat API calls.
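The toggle logic looks roughly like this; layerCache, showLayer, hideLayer, and buildFn are names I'm using here for illustration, not our actual code. DataSourceCollection.remove takes a second destroy argument, and passing false keeps the cached source alive:

```javascript
// Cache of query key (e.g. "ABC") -> data source already built for it.
var layerCache = {};

function showLayer(viewer, key, buildFn) {
  var ds = layerCache[key];
  if (!ds) {
    // First request: fetch from the API and build the data source.
    ds = buildFn(key);
    layerCache[key] = ds;
  }
  // Subsequent requests: just re-attach the cached source.
  viewer.dataSources.add(ds);
  return ds;
}

function hideLayer(viewer, key) {
  var ds = layerCache[key];
  if (ds) {
    // destroy = false, so the source stays usable in the cache.
    viewer.dataSources.remove(ds, false);
  }
}
```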
This has been working great for most of our use cases. However, we recently had a query that returned 32K records to be plotted, meaning we created 32K entities. What happens is that the browser's memory usage ramps up to 1.8 GB and then the tab crashes. On some computers it peaks around 1.2 GB and the data does plot, but memory still climbs very quickly. We measure memory with Windows Task Manager.