Some ideas for Liquid Galaxy and Cesium

While having two displays rendered by one computer might work well for virtual reality goggles, since resources are downloaded once and shared between the two displays, rendering 7+ high-resolution displays is probably too much of a burden for one computer. Liquid Galaxy for Google Earth generally has one computer/app per screen, though I'm not sure how much programming control the GE clients allow for customizing efficiency.

Perhaps the Cesium client could support three modes for driving many displays, with each computer running a Cesium app in one of these modes:

- Orchestrating computer (OC)

- Relay computer (RC)

- Worker computers (WCs)

All of these would be on the same LAN, and everything would be orchestrated by the OC. Only the OC and RC would require an internet connection; the multiple WCs just need a connection to the OC and RC.

- The OC would adjust the view, then tell the RC which resources need to be downloaded for all of the frustum slices. The RC would download the new data while retaining old data that is still in use.

- The OC would tell each WC which frustum slice it is to render. (The OC could also act as one of the WCs, rendering the forward slice.)

- Each WC would request from the RC only the data required for its frustum slice, retaining old data that is still in use while awaiting the new data.

Although the OC itself would only render one small slice, it would still hold geometry data for all of the frustum slices so it can perform pick operations. It would be a huge waste of bandwidth if each WC had to download directly from the internet, since neighboring slices usually share much of the same tile data; that is where the RC comes in. The RC's sole purpose is to consolidate all data and distribute only the parts of it that are needed. Broadcasting the entire scene data on the LAN would also be wasteful; the far-left frustum probably doesn't care about data that only the far-right frustum uses to render, so why send it there in the first place?
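The RC's consolidation role could be sketched roughly like this. This is an illustrative sketch only, not Cesium code: `TileRelay`, the tile IDs, and the `fetch` hook are all made-up placeholders standing in for whatever tile scheme and downloader a real implementation would use.

```python
# Sketch of the Relay Computer (RC) consolidating tile requests from
# the frustum slices. Tile IDs and fetch() are illustrative placeholders.

class TileRelay:
    def __init__(self, fetch):
        self.fetch = fetch   # downloads one tile from the internet
        self.cache = {}      # tile_id -> tile data

    def serve(self, requests_by_slice):
        """requests_by_slice: {slice_name: set(tile_ids)}.
        Downloads each distinct tile once, evicts tiles no slice still
        uses, and hands every slice only the tiles it asked for."""
        needed = set().union(*requests_by_slice.values())
        for tile_id in needed - self.cache.keys():
            self.cache[tile_id] = self.fetch(tile_id)
        # Retain still-used old data, drop the rest.
        self.cache = {t: d for t, d in self.cache.items() if t in needed}
        return {s: {t: self.cache[t] for t in tiles}
                for s, tiles in requests_by_slice.items()}

downloads = []
relay = TileRelay(fetch=lambda t: downloads.append(t) or f"data:{t}")
out = relay.serve({"left": {"A", "B"}, "front": {"B", "C"}})
# The shared tile "B" is fetched from the internet once, not twice.
```

The point of the sketch is the `needed - self.cache.keys()` line: the RC computes the union of all slices' requests, so a tile shared by neighboring slices crosses the internet link only once.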

Basic multi-machine view syncing in a Google Earth Liquid Galaxy rig works in a much simpler fashion than that...

A 'master' Google Earth broadcasts its view camera location. For the Earth Client that is a UDP broadcast; for the Earth API you can use a websocket echo relay.

The 'slave' Google Earths receive that camera information and have a local config that sets their yaw/pitch/roll offset from the master camera.
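The master/slave offset scheme above could be sketched like this. The field names and JSON packet format are assumptions for illustration only; they are not Earth's actual wire format, and the UDP/websocket transport is omitted.

```python
# Sketch of master/slave camera sync: the master encodes its pose,
# each slave applies its locally configured yaw/pitch/roll offset.
# Packet fields are illustrative, not Google Earth's real format.
import json

def encode_view(lon, lat, alt, heading, tilt, roll):
    """Master side: pack the camera pose for broadcast."""
    return json.dumps({"lon": lon, "lat": lat, "alt": alt,
                       "heading": heading, "tilt": tilt, "roll": roll})

def apply_offset(packet, yaw_offset, pitch_offset=0.0, roll_offset=0.0):
    """Slave side: decode the master pose and rotate it by this
    screen's configured offset."""
    view = json.loads(packet)
    view["heading"] = (view["heading"] + yaw_offset) % 360.0
    view["tilt"] += pitch_offset
    view["roll"] += roll_offset
    return view

# e.g. a screen mounted 36 degrees to the right of the master screen:
slave_view = apply_offset(encode_view(150.9, -33.8, 500.0, 10.0, 0.0, 0.0),
                          yaw_offset=36.0)
```

Each slave only ever changes the orientation fields; position is taken verbatim from the master, which is why the whole rig pans together.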

Along with camera data, Earth shares some very basic state information, like timestamps and the selected globe.

A Liquid Galaxy system will have a Squid HTTP cache in the path between the master/slaves and the internet to force content caching. Generally this is a distributed Squid running on each slave. There is no other 'content sync' between the clients.

So this fairly simple camera view sync from a 'master' to a set of 'slaves' DOES NOT share overlays, view settings, KML, content settings, and a whole bunch of other useful stuff! We have to do that via other means, e.g. KML NetworkLinks, starting up Earth with custom config, KML paths, or using the Earth API and websockets to sync other content.
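Of those other means, the KML NetworkLink route could look like the sketch below: each slave loads a small KML file at startup that periodically re-fetches shared content from the master. The URL and file names are placeholders; only the `NetworkLink`/`Link` element structure is standard KML 2.2.

```python
# Sketch: generate a KML NetworkLink each slave could load at startup
# so overlays/content (not just the camera) stay in sync. The href is
# a placeholder for wherever the master publishes its shared KML.

def network_link(name, href, refresh_seconds=2):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>{name}</name>
    <Link>
      <href>{href}</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>{refresh_seconds}</refreshInterval>
    </Link>
  </NetworkLink>
</kml>"""

doc = network_link("shared-overlays", "http://master.lan/shared.kml")
```

With `refreshMode` set to `onInterval`, every slave polls the master's KML on the same cadence, so content changes propagate without any extra sync protocol.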

Because of latency it is usually poor form to run the view master as one of the display screens (like the middle one). The master will always be a frame ahead of the slaves, which shows up as tearing when the scene moves. But for a simple setup it is okay to have a displaying master.

For systems that need 'pick operations', I've used tools like Synergy or InputDirector to share a single keyboard/mouse across the entire display cluster (master and slaves). This is useful for general operation of the clustered computers anyway; otherwise you have piles of keyboards and mice to deal with!

I would probably start with a straightforward master/slave camera sync and build from there.

I probably missed out some details here. Very happy to answer any of your questions.

Andrew | eResearch | UWS - Liquid Galaxy Project