Implement a self-rendered raster layer

I’m trying to implement a layer which displays raster data sent by a server.

The data sent by the server has no built-in support in the widely used browsers (it is JPEG2000 data).

Thus I’m decoding the data myself and passing the result to Cesium for display.

What makes it a little complicated:

  1. The server is stateful, so both client and server must maintain a channel. The channel is associated with a single region of interest. The region may change over time, but at any point in time there is only a single region for which the server sends data on the channel.

I can use a few channels in a session, but the server does not perform well with more than a very small number of channels.

  2. The region of interest has uniform resolution (which is problematic for 3D).

  3. The server supports progressive sending of data that gradually enhances the quality (“quality layers” in JPEG2000), a property I want to use because very little network bandwidth is available.

  4. The decoding is heavy in terms of CPU time.
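To make the single-region constraint concrete, the channel management I have in mind looks roughly like this (a minimal sketch with hypothetical names; `send` stands for the transport to the server, and stale regions are coalesced so only the most recent one is pending while a request is in flight):

```javascript
// Sketch of a single-channel region-of-interest manager: the server accepts
// only one region per channel, so overwrite any pending region with the
// newest one and send it once the channel is free again.
function createRoiChannel(send) {
  var busy = false;
  var pending = null;

  function flush() {
    if (busy || pending === null) return;
    var roi = pending;
    pending = null;
    busy = true;
    send(roi, function () {
      busy = false;
      flush(); // a newer region may have arrived meanwhile
    });
  }

  return {
    setRegion: function (roi) {
      pending = roi; // overwrite: only the latest region matters
      flush();
    }
  };
}
```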

As a first stage I’ve implemented an ImageryProvider which simply creates a channel for each tile requested by the rendering engine. It worked, but it created too many connections and I didn’t get progressive rendering. In addition the performance was poor, a problem which was almost resolved by implementing a priority mechanism that first decodes tiles in the view area of the Cesium viewer.
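The priority mechanism amounts to something like the following sketch (plain `{west, south, east, north}` rectangles in any consistent units, not Cesium types; all names are illustrative):

```javascript
// Does rectangle a overlap rectangle b? (antimeridian crossing ignored)
function intersects(a, b) {
  return a.west < b.east && b.west < a.east &&
         a.south < b.north && b.south < a.north;
}

// Reorder the decode queue so tiles intersecting the current view
// rectangle are decoded first; relative order is otherwise preserved.
function prioritizeTiles(tiles, viewRect) {
  var visible = tiles.filter(function (t) { return intersects(t.rectangle, viewRect); });
  var hidden = tiles.filter(function (t) { return !intersects(t.rectangle, viewRect); });
  return visible.concat(hidden);
}
```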

Next I implemented a self-rendered raster “layer” which changes the region of interest of the channel according to the view area. That resolved the multiple-channels problem and I got progressive rendering. However I encountered the following problems:

a. The method I used to show the decoded pixels was to implement an imagery provider which shows a single canvas with the decoded pixels. Each time the image was updated (repositioned or progressively decoded) I had to remove the old imagery provider and replace it with a new one. I guess that’s not the correct way to do such things, and it may cause bad behavior like wrong z-ordering when the old provider is replaced with a new one, etc. Some of these issues might be resolved by using primitives with an Image material, but then I would have to use the data-URL form of images. I’m not sure how that would affect performance, because it would cause a lot of conversions from canvas to data URL.
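One way to at least keep the z-order stable when swapping providers might be to re-insert the replacement at the old layer’s index. A sketch, assuming `layers` behaves like Cesium’s ImageryLayerCollection (`indexOf()`, `remove()`, `addImageryProvider()` with an index argument) and `makeProvider` is a hypothetical factory, e.g. one that builds a SingleTileImageryProvider from `canvas.toDataURL()`:

```javascript
// Swap a layer's imagery provider in place, keeping its z-order.
// `layers` is assumed to offer indexOf(layer), remove(layer, destroy)
// and addImageryProvider(provider, index), as ImageryLayerCollection does.
function replaceLayerInPlace(layers, oldLayer, makeProvider) {
  var index = layers.indexOf(oldLayer);
  layers.remove(oldLayer, false); // false: do not destroy the removed layer
  return layers.addImageryProvider(makeProvider(), index);
}
```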

b. I had to write special code to determine the view area in order to send it to the server (using pickEllipsoid and similar functionality). I guess this code duplicates something already done inside the Cesium engine. In addition, I saw in some discussions that pickEllipsoid is not supported in 2D. Generally I would be very happy to have a function which calculates the view area for me, rather than implementing that code myself.
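The corner-based computation reduces to something like this sketch (assuming the screen corners were already picked with pickEllipsoid and converted to cartographic `{longitude, latitude}` pairs in radians; antimeridian crossing is not handled):

```javascript
// Compute the bounding rectangle of a set of cartographic corner points.
function boundingRectangle(corners) {
  var west = Infinity, south = Infinity, east = -Infinity, north = -Infinity;
  corners.forEach(function (c) {
    west = Math.min(west, c.longitude);
    east = Math.max(east, c.longitude);
    south = Math.min(south, c.latitude);
    north = Math.max(north, c.latitude);
  });
  return { west: west, south: south, east: east, north: north };
}
```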

c. The way I implemented it raises an API issue: as opposed to Cesium’s nice API for adding and removing imagery providers (the addImageryProvider() and removeLayer() methods), in my implementation the user can only use the methods I expose (for example an add() method which accepts the Viewer as an argument).

d. In 3D mode, when the resolution is not uniform, the image is not sharp in the near region. I know that’s an inherent problem of the way my server works; I’m just pointing it out.

I think what I’m really missing here is a way to implement a plugin more powerful than the ImageryProvider interface: a self-rendered raster layer, which receives view-area-change events from the render engine and can decide when and how to refresh its tiles.
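To illustrate, the interface I have in mind would look roughly like this (entirely hypothetical, not an existing Cesium API; `channel` is whatever object manages the server connection):

```javascript
// Hypothetical plugin shape: the render engine pushes view-area changes,
// and the layer decides when and how to refresh its tiles.
function createSelfRenderedLayer(channel) {
  return {
    // Would be called by the engine whenever the view area changes.
    onViewAreaChanged: function (rectangle) {
      channel.setRegion(rectangle);
    },
    // Would be called as progressive quality layers are decoded.
    refreshTiles: function (decodedCanvas) {
      // draw decodedCanvas into the layer's texture
    }
  };
}
```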

Another alternative (even better for me, though I guess less reusable by others) is to expose to the ImageryProvider implementation the list of tiles in the view area.

In the short time I’ve been working with Cesium I’ve found it great and useful. I would be happy if you could tell me the correct way to cope with these problems, so I can make Cesium work in this scenario too.