My use-case is passing floating-point data encoded in RGB tiles (similar in format to https://docs.mapbox.com/help/troubleshooting/access-elevation-data/#mapbox-terrain-rgb) to the client and then being able to control the rendering of said data given some user parameters.
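For reference, the Terrain-RGB encoding linked above packs elevation into the three color channels, and decoding it is a one-liner (formula taken from the Mapbox docs):

```javascript
// Decode one Mapbox Terrain-RGB pixel (0-255 channel values) into an
// elevation in meters: height = -10000 + (R*256*256 + G*256 + B) * 0.1
function decodeTerrainRGB(r, g, b) {
  return -10000 + (r * 256 * 256 + g * 256 + b) * 0.1;
}
```

So `decodeTerrainRGB(0, 0, 0)` is -10000 m, and a pixel of (1, 134, 160) decodes to 0 m (sea level).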
I see a demo where a globe material is applied using the terrain as a data source:
Can the same be done using tiles from an individual layer as a data source?
Basically, I’m wondering if there’s a way to specify some imagery data sources, apply a pipeline of operations that changes how they’re rendered, and have the result render as just another layer on the globe (respecting stacking order and everything else).
Thanks for any help on this!
Sounds like this could be a great use case for exposing custom shaders on imagery layers. Currently, I think you would have to write a custom material, fetch the image tiles yourself, and pass the individual images as textures to that material, which would be a bit hacky.
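To give a feel for that workaround, here’s a rough sketch of what the custom-material route could look like: a material fabric whose GLSL decodes a pre-fetched Terrain-RGB tile and color-ramps it. The `type`, uniform names, tile path, and ramp logic here are all illustrative, not an official recipe:

```javascript
// Hypothetical fabric for a custom Cesium material. The image uniform
// would point at a tile you fetched yourself; min/max are user params.
const fabric = {
  type: 'ElevationRamp', // made-up material name
  uniforms: {
    image: 'path/to/terrain-rgb-tile.png',
    minValue: 0.0,
    maxValue: 3000.0,
  },
  source: `
    czm_material czm_getMaterial(czm_materialInput materialInput) {
      czm_material material = czm_getDefaultMaterial(materialInput);
      vec4 rgba = texture2D(image, materialInput.st);
      // Channels arrive as 0.0-1.0, so scale by 255 before decoding
      float h = -10000.0 + (rgba.r * 255.0 * 65536.0
                          + rgba.g * 255.0 * 256.0
                          + rgba.b * 255.0) * 0.1;
      float t = clamp((h - minValue) / (maxValue - minValue), 0.0, 1.0);
      material.diffuse = mix(vec3(0.0, 0.0, 1.0), vec3(1.0, 0.0, 0.0), t);
      material.alpha = (h < minValue || h > maxValue) ? 0.0 : 1.0;
      return material;
    }`,
};
// Would then be used roughly as: new Cesium.Material({ fabric })
```

The hacky part is everything around this: tracking which tile covers which geometry and keeping textures in sync with camera movement is exactly what the imagery layer machinery normally does for you.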
Can you describe your use case a bit more? What kind of operations would you want to do on the imagery layer? Would it be like filtering it to only show values above/below a certain threshold, or would it be adding it to another layer, or something else?
I’m using this information to inform a feature request on GitHub (you can also go ahead and open that feature request yourself if you’d like).
We have multiple use-cases in mind, but the starting use-case is exactly what you describe: the ability to only show values within a certain range, and to adjust how that data is displayed by switching color palettes.
I do think the ability to combine multiple imagery sources and operate on them could be quite powerful too. One could imagine combining, say, elevation, sunlight, and temperature data to highlight plant growing zones.
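To make the first use-case concrete, the per-pixel logic we have in mind looks roughly like this (plain JavaScript; the function name and palette handling are illustrative):

```javascript
// Decode an RGB-encoded value, hide pixels outside a user-selected
// range, and map the rest through a color palette of [r, g, b, a]
// entries. This is a sketch of the desired operation, not Cesium API.
function styleElevationPixel(r, g, b, min, max, palette) {
  const value = -10000 + (r * 256 * 256 + g * 256 + b) * 0.1;
  if (value < min || value > max) {
    return [0, 0, 0, 0]; // fully transparent outside the range
  }
  const t = (value - min) / (max - min); // normalize to [0, 1]
  const index = Math.min(palette.length - 1, Math.floor(t * palette.length));
  return palette[index];
}
```

Switching color palettes is then just swapping the `palette` argument, with no re-fetching of tiles.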
The specific functionality I’m trying to approximate in Cesium is similar to that demoed here in OpenLayers:
and documented here:
Basically it’s an imagery source that can itself take multiple imagery sources as inputs and run a function on each pixel, outputting a new pixel.
Whereas the above works with canvas and web workers, I’m sure access to a custom WebGL shader for imagery layers would make this even more powerful!
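For context, the OpenLayers operation is just a plain function that receives one pixel per input source and returns one output pixel; something in this shape (the averaging here is purely illustrative):

```javascript
// Per-pixel operation in the shape OpenLayers' Raster source expects:
// `pixels` is an array with one [R, G, B, A] entry per input source,
// and the return value is the output pixel for that location.
function averageOperation(pixels) {
  const [a, b] = pixels;
  return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2, 255];
}
// Wired up in OpenLayers roughly as (sources assumed to exist):
// new Raster({ sources: [sourceA, sourceB], operation: averageOperation });
```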
Those all do sound like incredibly useful cases - I’ve opened a feature request documenting this here: https://github.com/AnalyticalGraphicsInc/cesium/issues/8110
Thanks so much for bringing this up!