I am creating some visualizations with Cesium and was wondering whether I can make them available to people who do not have the necessary 3D hardware on their devices. Is it possible to render Cesium on the server side, deliver the resulting images to the client, and retain mouse interaction? I imagine it would slow everything down significantly and require serious graphics capability on the server, but I just wondered whether it is possible.
Does anyone have experience implementing this, or any clues as to where I might start?
This is largely a solved problem via commercial solutions like NVIDIA GRID.
That said, rolling your own is probably possible, but I don’t think anyone has discussed actually trying to implement it on the forum, and it would definitely be non-trivial. There are tools out there, such as https://github.com/mikeseven/node-webgl or https://github.com/stackgl/headless-gl, that might let you render Cesium content on the server, but you would then need something like WebSockets or WebRTC to handle client/server communication and interaction. Depending on the implementation, you might also be able to run the Windows BrowserControl or WebKit/Blink on the server. Even if you got it to work, I’m not sure it would scale well.
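To give a feel for the interaction side, here is a minimal sketch of what the client/server channel might look like: the client sends small JSON mouse events over a WebSocket, and the server applies them to camera state before rendering the next frame headlessly. The message format and functions (`applyMouseDrag`, `handleMessage`, the sensitivity constant) are all hypothetical, not a Cesium or headless-gl API.

```javascript
// Hypothetical server-side handler for client mouse events.
// The client would send e.g. ws.send(JSON.stringify({type: "drag", dx, dy})).

function makeCameraState() {
  // Illustrative orbital-camera state: heading/pitch in radians, range in meters.
  return { heading: 0.0, pitch: -0.5, range: 10000.0 };
}

// Convert a mouse-drag delta (in pixels) into heading/pitch changes.
function applyMouseDrag(state, event, sensitivity = 0.005) {
  return {
    heading: state.heading + event.dx * sensitivity,
    // Clamp pitch so the camera cannot flip past straight-down or the horizon.
    pitch: Math.max(-Math.PI / 2, Math.min(0, state.pitch - event.dy * sensitivity)),
    range: state.range,
  };
}

// Wheel events zoom by scaling the camera range.
function applyWheel(state, event) {
  const factor = event.deltaY > 0 ? 1.1 : 0.9;
  return { ...state, range: state.range * factor };
}

// Server-side dispatcher: one JSON message in, new camera state out.
// The render loop would read this state when drawing the next frame.
function handleMessage(state, json) {
  const msg = JSON.parse(json);
  switch (msg.type) {
    case "drag":  return applyMouseDrag(state, msg);
    case "wheel": return applyWheel(state, msg);
    default:      return state;
  }
}
```

Keeping the events this small is the point: the client stays a dumb image viewer, and only compressed frames flow back the other way.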
With that said, if you get to it first, let us know your experience. I expect it to be a fair amount of work, since Cesium will not run out of the box on Node (see https://github.com/AnalyticalGraphicsInc/cesium/pull/1438). Longer term, we would likely also reorganize Cesium into modules.
This is something I have been looking at for some time with three.js and other WebGL projects as well. I haven’t found great solutions yet.
ParaViewWeb may be a good precedent for server-side rendering: it can be compiled against OSMesa to render on the server when no GPU hardware is found, and it works fairly well. I’m not sure whether some aspects of that approach translate to server-side Cesium rendering.
Video will be a little more complicated. Generally, you’ll need to capture each frame by syncing with the render loop; there’s a thread that covers this here. You should still be able to use the above solution to render each frame, and then there are tools available in Node for stitching the frames together and outputting a video.