Summary: I have a custom imagery tiler running server-side. This tiler takes large GeoTIFF files and tiles them, baking the raw Float32 value into the RGBA red channel. It uses a known min/max to normalize the data and produce a “raw” imagery tile. Then, in Cesium, I have a custom ImageryProvider (CustomTemplateImageryProvider, based on UrlTemplateImageryProvider) which passes the image to a “renderer” along with a palette confined to the known min/max. The renderer, TileRenderer, applies a FS/VS pair to translate the raw RGBA into a paletted RGBA.
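For clarity, the normalization amounts to a linear mapping between the known range and the 0–255 red byte. A minimal sketch of what I mean (the function names are placeholders, not our actual tiler code, which runs server-side):

```javascript
// Linear min/max normalization that bakes a Float32 sample into the
// 8-bit red channel (a sketch of the server-side step).
function encodeToRed(value, min, max) {
  var t = (value - min) / (max - min); // normalize to 0..1
  t = Math.min(1, Math.max(0, t));     // clamp out-of-range samples
  return Math.round(t * 255);          // quantize to a byte
}

// Inverse mapping used client-side to recover an approximate raw value
// from the red byte (quantization limits precision to (max - min) / 255).
function decodeFromRed(red, min, max) {
  return min + (red / 255) * (max - min);
}
```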
(Attached screenshots: 1 = raw, 2 = paletted)
This all works great. We intend to open source it once the custom Cesium modules are refined enough to be usable outside our application.
The problem: because I’m calling TileRenderer from requestImage inside my ImageryProvider, the “paletted” tile is what gets cached by Imagery. I need to move my TileRenderer call toward the end of the stack so that Imagery caches the raw tiles yet displays the paletted tiles; ideally, the tile is paletted immediately before it’s displayed. That would let me customize pickFeatures to extract the raw value from the RGBA, which in turn allows improved point inspection (no XHR round trip) and lets me generate “value labels” to overlay onto the imagery. My current method inside TileRenderer involves gl.texImage2D(), which requires a canvas/image element, neither of which I appear to be able to access this far into the tile rendering process.
var imageTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, imageTexture); // bind first; the calls below target the bound texture
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);
with source being the HTMLImageElement from ImageryProvider.requestImage().
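I realize gl.texImage2D() also has an overload that takes explicit dimensions and an ArrayBufferView, so in principle raw bytes could be uploaded without a canvas/image element if I could get at them. A sketch of what I mean (the helper name is mine; it wraps the same calls as above):

```javascript
// Upload raw RGBA bytes directly, without an HTMLImageElement.
// pixels is a Uint8Array of length width * height * 4.
function uploadRawTexture(gl, pixels, width, height) {
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  // The ArrayBufferView overload: dimensions are passed explicitly.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return texture;
}
```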
I’ve tried to familiarize myself with the Cesium render process. I see the texture creation in ImageryLayer, along with the reprojection; the Imagery is then cached and recalled via getImageryFromCache. I thought about using finalizeReprojectTexture() to modify the texture, but I believe that would still result in the paletted texture being cached by ImageryLayer.
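If the raw tile is the one cached, my pickFeatures plan reduces to sampling the red byte at the picked pixel and inverting the normalization. Roughly (a sketch; `rawPixels` stands in for the tile’s RGBA data, e.g. from getImageData, and the names are mine):

```javascript
// Recover the approximate raw value at (x, y) in a raw tile.
// rawPixels is a flat RGBA array (Uint8ClampedArray or similar);
// TILE_SIZE, min and max stand in for real configuration.
var TILE_SIZE = 512;

function pickRawValue(rawPixels, x, y, min, max) {
  var red = rawPixels[(y * TILE_SIZE + x) * 4]; // red channel holds the value
  return min + (red / 255) * (max - min);       // invert the normalization
}
```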
As I work through this, I’m having an issue using textures that are bound to the viewer.canvas context. I’ve created a separate 512x512 canvas (our tile size) in TileRenderer for all of the tile palette rendering. Because of that, I can’t bindTexture() the _imageryCache texture unless I use viewer.canvas, and attaching viewer.canvas to TileRenderer results in odd-looking tiles because the 512x512 size doesn’t apply to viewer.canvas. I’m sure I’m missing a canvas where I can perform tile tasks like this without running into context issues, or possibly a queue function where altering the canvas size doesn’t disrupt the map display.
Please note: I’m a WebGL newbie. I’ve used a document.createElement(‘canvas’) inside TileRenderer to render the tiles using the gl.texImage2D() call above, and I re-use that canvas for each tile so I don’t exhaust any context limits. If there’s a more ideal approach, I’m all ears. My FS/VS uses 0->1 normalized texture coordinates (not Web Mercator, so the Web Mercator reprojection texture might not be usable here?), though I’d like to use Web Mercator coordinates to extract values via pickFeatures.
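For that pickFeatures part, my thinking is that mapping a picked Web Mercator position into tile-local pixel coordinates is just a linear interpolation across the tile’s rectangle. A sketch under that assumption (the function and parameter names are mine; the rectangle bounds are in Web Mercator meters):

```javascript
// Map a Web Mercator position (meters) to pixel coordinates inside a
// TILE_SIZE x TILE_SIZE tile covering [west..east] x [south..north].
var TILE_SIZE = 512; // our tiler's tile dimension

function positionToTilePixel(x, y, west, south, east, north) {
  var u = (x - west) / (east - west);    // 0..1 across the tile
  var v = (north - y) / (north - south); // image rows run top-down
  return {
    px: Math.min(TILE_SIZE - 1, Math.floor(u * TILE_SIZE)),
    py: Math.min(TILE_SIZE - 1, Math.floor(v * TILE_SIZE))
  };
}
```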
Thanks in advance for any input you might be able to provide.