Terrain Rendering

Effectively, yes, I think. To get good terrain quality in mountainous
areas (like Switzerland!) with a height grid, you need a very high
sampling density. To be fair to WebGL Earth, I don't think they had
access to such high density data. WebGL Earth also models the Earth as
a unit sphere in world space, and so you get precision problems at the
surface. However, these are of the order of a few meters, and so are
relatively small compared to the errors due to insufficient sample
density.

The ReadyMap SDK also uses height grids:
  http://demo.pelicanmapping.com/rmweb/webgl/tests/elevation.html

Regards,
Tom

OpenWebGlobe uses a classic Chunked LOD approach. Terrain tiles are
triangular meshes with associated metadata (e.g. maximum error) for
the Chunked LOD algorithm to decide whether to split or not. The
meshes are computed server-side; computing all the meshes for
Switzerland from a 25 m resolution dataset takes a few hours. The tiles
are converted into JSON format, gzip-compressed, and served statically
from Amazon's S3 (actually we proxy S3 to do referrer checks, but in
theory they could be served directly from S3).
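To make the split/don't-split decision concrete, here is a minimal sketch of the usual Chunked LOD test (hypothetical names, not the actual OpenWebGlobe code): project the tile's stored maximum geometric error to screen space, and split when it exceeds a pixel tolerance.

```javascript
// Sketch of a Chunked LOD split test. Each tile carries the maximum
// geometric error (in meters) introduced by its simplified mesh; we
// project that error to pixels and split when it exceeds a tolerance.
// The camera fields and tile metadata names are assumptions.
function screenSpaceError(tile, camera) {
  // Perspective projection: the error shrinks linearly with distance.
  const distance = Math.max(camera.distanceTo(tile.center), 1e-6);
  const k = camera.viewportHeight / (2 * Math.tan(camera.fovY / 2));
  return (tile.maxGeometricError / distance) * k;
}

function shouldSplit(tile, camera, maxPixelError) {
  return screenSpaceError(tile, camera) > maxPixelError;
}
```

With a 25 m tile error viewed from 10 km on a 1080-pixel-high viewport, the projected error is a couple of pixels, so the split decision hinges directly on the tolerance chosen.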

I should take a closer look at the terrain rendering and processing code in
OpenWebGlobe. It sounds like a good match algorithmically, and the license
is compatible.

Also, don't hesitate to contact the key guy behind OpenWebGlobe,
Martin Christen at the University of Applied Sciences Northwestern
Switzerland (FHNW). His email is <firstname>.<lastname>@fhnw.ch, and
he's very competent and helpful.

The core chunked LOD algorithm starts here:
  https://github.com/OpenWebGlobe/WebViewer/blob/master/source/core/globerenderer.js#L489

Practically, the system works well, and it has the nice feature that
terrain meshes are simply 3D objects. The same code can then be used
to draw other models, e.g. buildings, and in theory the same Chunked
LOD/terrain tiles as models approach can be used to render Chunked LOD
cities à la Nokia Maps 3D WebGL ( http://maps3d.svc.nokia.com/webgl/
). This sort of aerially-captured city data is becoming increasingly
common, and it would be great if Cesium had the capability to display
it.

Agreed. Do you know of any open sources of city data, at least for testing?

I don't know of any open data, but I chatted to the guys from Acute 3D
  http://www.acute3d.com/
at a conference recently. They're focused on the data capture, and
would be delighted to have a viewer to demonstrate what they can do.
I'll drop them an email to see if they'd be willing to share one of
their demo data sets.

3. Performance-wise (related to 1), I'd definitely profile gzipped raw
arraybuffer data against more sophisticated formats like the one
Google Body uses. I suspect there won't be much difference in the
amount of data transferred across the wire, and that the raw
arraybuffers will be much faster to handle in the client. See also how
Nokia does it: http://idiocode.com/2012/02/01/nokia-3d-map-tiles/ .
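Part of the appeal of the raw-arraybuffer route is how little client code it needs. A hypothetical sketch (function names are made up; assumes the server sends gzip-compressed little-endian float32 data, which HTTP decompresses transparently):

```javascript
// Hypothetical client-side handling of a raw terrain tile: no JSON
// parse, no per-value conversion -- just view the bytes as floats.
async function loadTile(url) {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();
  // Interpret the payload directly as interleaved xyz positions.
  return new Float32Array(buffer);
}

// The same zero-copy view works on any ArrayBuffer:
function positionsFromBuffer(buffer) {
  return new Float32Array(buffer);
}
```

The typed-array view is essentially free, which is the asymmetry Tom is pointing at: a fancier encoding saves wire bytes but adds a real decode step.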

You may very well be right. The Google Body guys claimed a big improvement
over a simple gzipped mesh, but their data could be different from ours. I
believe there's an open source library for encoding data in their format, so
it may not be much work to try it out and see how much smaller (if any) the
data can get. My gut is that a 20% reduction in data size is worth a 20%
increase in client-side CPU time (just to make up numbers), especially if
some of that processing can be done in a web worker. But in any case,
developing our terrain engine will be an iterative process, and fancy
encodings won't be in the earliest iterations.

Indeed, and it looks like I might be wrong anyway :slight_smile:
  Google Code Archive - Long-term storage for Google Code Project Hosting.
The Google Body encoding is about one third the size of the raw 32-bit data.

Looking forward, I think it will be a common case that terrain data
will come from multiple sources and need to be combined somewhere. As
Cesium is a globe, a global data set is required, e.g. SRTM as you
mention on the Wiki. However, our clients (effectively states and
counties) typically have much more detailed data for their local
areas. Therefore, we need to be able to use more detailed data when it
is available, and fall back to less detailed data when it is not. As
combining data sets, especially terrain tiles, is a computationally
intensive process, I'd look first at preparing the data on the server
side and serving effectively static, optimized-for-the-client files
rather than trying to combine data sets in real time, either on the
server or on the client.

Agreed, especially if serving optimized meshes. When working with height
maps (despite their limitations), on-the-fly processing is a bit more
practical.

I'm curious: do you agree with my assessment that combining imagery sources
on the client is reasonable, even if combining terrain sources is not?

Yes, definitely agree that combining imagery on the client is very
reasonable, and of course there are a number of approaches (e.g.
multitexturing or rendering to intermediate frame buffers and then
compositing).

This also opens the door to doing per-pixel image processing effects
in the fragment shader, which is particularly useful when combining
image layers. For example, you may wish to change the saturation of a
bright base layer so that it doesn't overwhelm other image layers
higher in the stack.
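The saturation tweak described above is a one-line blend toward luminance. In practice it would run per-fragment in GLSL, but the formula is the same; here it is sketched in plain JavaScript (names are made up):

```javascript
// Adjust saturation by blending each channel toward the pixel's
// luminance: 0 = grayscale, 1 = unchanged, >1 = oversaturated.
function adjustSaturation(rgb, saturation) {
  // Rec. 709 luma weights.
  const luma = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2];
  return rgb.map((c) => luma + saturation * (c - luma));
}
```

Because the blend is per-pixel, it composes naturally with multitexturing or frame-buffer compositing: desaturate the base layer before the overlays are blended on top.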

Finally, subjectively, I think it will be very hard to make gridded
terrain data work well in a web context, basically due to the greater
bandwidth needed to provide a similar quality to triangular mesh
terrain. I might be wrong though!

I tend to agree, despite the fact that, on the surface, gridded terrain
appears more compact because only the heights are explicitly represented. I
think there might still be a place for gridded terrain, though. Some
folks will be willing to accept popping artifacts and the like in order to
avoid a lengthy pre-processing step.

The lack of pre-processing is certainly an advantage, and it allows
people to get their data into Cesium quickly. Of course, in the client
you can convert gridded terrain into a triangular mesh easily (albeit
not efficiently), whereas the inverse operation is tricky. Such a crude
and inefficient operation may be sufficient for "quick start"
purposes.
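The crude conversion mentioned above is only a few lines: every grid cell becomes two triangles. This is a sketch under assumed conventions (row-major flat height array, uniform cell size, made-up names), not a production tessellator:

```javascript
// Convert a gridded height field into an indexed triangle mesh.
// heights: flat row-major array, width x height samples.
function gridToMesh(heights, width, height, cellSize) {
  const positions = new Float32Array(width * height * 3);
  for (let row = 0; row < height; row++) {
    for (let col = 0; col < width; col++) {
      const i = (row * width + col) * 3;
      positions[i] = col * cellSize;       // x
      positions[i + 1] = row * cellSize;   // y
      positions[i + 2] = heights[row * width + col]; // z
    }
  }
  // Two triangles per cell.
  const indices = [];
  for (let row = 0; row < height - 1; row++) {
    for (let col = 0; col < width - 1; col++) {
      const a = row * width + col;
      const b = a + 1;
      const c = a + width;
      const d = c + 1;
      indices.push(a, b, c, b, d, c);
    }
  }
  return { positions, indices: new Uint16Array(indices) };
}
```

The inefficiency Tom notes is visible here: every sample becomes a vertex regardless of whether the terrain is flat, which is exactly what a simplified mesh avoids.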

A further consideration is that there is a lot of geographical data
out there in local projections, e.g. UTM or UK Ordnance Survey Grid.
These data will have to be re-projected somewhere to be compatible
with Cesium's WGS84 ellipsoid, so it may well be the case that some
server-side processing is already needed.

Regards,
Tom

I’m a bit late to the party, which is good because it sounds like you guys have this all figured out. :slight_smile:

Milo Martin gave a guest lecture in my course at Penn, and had a great quote: “only three architectures matter: mobile, mobile, and mobile.” This, of course, is an exaggeration (in software, but perhaps not hardware architecture research), but not a far stretch. Mobiles have weaker GPUs (Tegra 3 has 12 cores; a GTX 680 has 1,536), sometimes crippled GLSL compilers, weaker CPUs to run JavaScript, and less bandwidth over 3G or 4G.

Bandwidth is the killer. As trends continue, the gap between bandwidth and compute continues to widen. Our terrain implementation needs to attack bandwidth.

We have already come to the conclusion that using a proxy is probably a good thing, and converting to a typed array is also a good thing. To avoid jitter, I assume we will render tiles RTC (relative to center). Our proxy can do the translations, and send the vertex buffer ready for rendering. Perhaps we start with this, and then evaluate the trade-offs of using Won Chun’s webgl-loader. Kevin, I can introduce you to Won. He is a very smart guy, and I’m sure he would be happy to help.
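The RTC translation above can be sketched as follows (hypothetical names; the point is that the subtraction happens in double precision on the proxy, so the remaining offsets are small enough to survive as 32-bit floats without jitter):

```javascript
// Relative-to-center (RTC) encoding on the proxy: subtract the tile
// center from each position in double precision, ship the small
// float32 offsets, and let the client add the center back in the
// vertex shader via the model matrix.
function encodeRTC(positions) {
  // positions: flat [x, y, z, x, y, z, ...] array of doubles.
  const n = positions.length / 3;
  const center = [0, 0, 0];
  for (let i = 0; i < n; i++) {
    center[0] += positions[3 * i] / n;
    center[1] += positions[3 * i + 1] / n;
    center[2] += positions[3 * i + 2] / n;
  }
  const offsets = new Float32Array(positions.length);
  for (let i = 0; i < positions.length; i++) {
    offsets[i] = positions[i] - center[i % 3];
  }
  return { center, offsets };
}
```

For positions near the Earth's surface (~6.4e6 m from the origin), raw float32 coordinates only have meter-ish precision, which is exactly the jitter RTC avoids.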

I am, of course, assuming that we are serving meshes, not height fields. Serving height fields can be better bandwidth-wise, especially if the terrain is bumpy (not flat!), because only the heights are sent, not the full xyz positions or index data.

The proxy can compute the bounding sphere for culling - a trivial increase in payload for less client-side processing and presumably less L2 pollution. The proxy should cache everything if the source data’s terms of service allow. A custom server would do RTC, bounding sphere, etc. offline and store it.
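A proxy-side bounding sphere of the kind suggested above could look like this (a centroid-plus-farthest-vertex sketch with made-up names, not a minimal enclosing sphere, but cheap and adequate for culling):

```javascript
// Compute a conservative bounding sphere for a tile's vertices:
// center at the centroid, radius at the farthest vertex.
function boundingSphere(positions) {
  const n = positions.length / 3;
  const center = [0, 0, 0];
  for (let i = 0; i < n; i++) {
    center[0] += positions[3 * i] / n;
    center[1] += positions[3 * i + 1] / n;
    center[2] += positions[3 * i + 2] / n;
  }
  let radiusSq = 0;
  for (let i = 0; i < n; i++) {
    const dx = positions[3 * i] - center[0];
    const dy = positions[3 * i + 1] - center[1];
    const dz = positions[3 * i + 2] - center[2];
    radiusSq = Math.max(radiusSq, dx * dx + dy * dy + dz * dz);
  }
  return { center, radius: Math.sqrt(radiusSq) };
}
```

Shipping the four numbers (center + radius) with each tile is indeed a trivial payload increase compared to recomputing this over every vertex on the client.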

Also, as part of the proxy or our server, we will need to include the credit(s) for the data for each tile, so we can show credit lines for all datasets in view on the client.

As we think about what a proxy or custom server needs to do, I want to suggest that it help us return an accurate cartographic position under a pixel on the screen. Using traditional unproject routines makes the position depend on the terrain LOD, resulting in very poor answers for distant terrain.

Regards,

Patrick

My GTX 680 loves Cesium; the embedded Intel chips, not so much.

Hi guys,

This question might be unrelated to the topic, if you choose to implement chunked LOD and use skirts to fill gaps, but… Combining data from multiple sources can be very challenging. The main issues I can remember at the moment are:

  1. Sampling can be done as a raster (sampling at the center of each cell) or as a grid (sampling at the cell corners).
  2. Sampling points do not coincide across sources (1", 3", or 30").
  3. Some layers are missing at certain resolutions. (For example, we have a 3" grid on one server and a 30" grid on the other. What is in between?)
  4. Samples can be in different projections. (A 5 m grid is certainly not in WGS84; everything with resolution below 1" is probably in UTM or some exotic projection.)

In order to solve these issues I used to reproject and/or resample the data. It is a very time-consuming operation if done on demand (without preprocessing). I’m still not sure of the most elegant way to do it. What will be your strategy in this project?
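The resampling step at the heart of these issues reduces to interpolating a source grid at an arbitrary fractional position; the raster-vs-grid registration difference in point 1 then amounts to a half-cell shift in how that position is derived from geographic coordinates. A bilinear sketch (assumed row-major layout, made-up names, interior samples only):

```javascript
// Bilinearly sample a row-major height grid at a fractional
// (col, row) position. Callers must keep (col, row) at least one
// cell away from the right/bottom edges.
function sampleBilinear(heights, width, col, row) {
  const c0 = Math.floor(col);
  const r0 = Math.floor(row);
  const fc = col - c0; // fractional part along columns
  const fr = row - r0; // fractional part along rows
  const h = (r, c) => heights[r * width + c];
  const top = h(r0, c0) * (1 - fc) + h(r0, c0 + 1) * fc;
  const bottom = h(r0 + 1, c0) * (1 - fc) + h(r0 + 1, c0 + 1) * fc;
  return top * (1 - fr) + bottom * fr;
}
```

Doing this per output sample, per tile, over a reprojection is what makes on-demand combination so expensive, and why precomputing it server-side is attractive.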

Regards,
Aleksandar

Hey Aleksandar,

Thanks for this useful list. I’ve run into some of these problems in the past while loading terrain from different sources for line-of-sight computations.

I think if we stand up our own terrain servers, we will have to do the lengthy reproject operations ahead of time as you describe, so that we can serve up tiled heightmaps or meshes at multiple levels of detail very quickly.

If we allow streaming terrain from ArcGIS Server or GeoServer, though, those servers may choose to reproject on the fly rather than ahead of time. In my experiments with ArcGIS Server so far (which I believe reprojects on the fly, at least for ImageServer services), the terrain streaming is slow but usable, so I think it’s still useful to support that, especially since it’s easy!

I’m not sure if I’ve answered your question directly, so let me know.

Kevin

Web3D is co-located with SIGGRAPH next month. The schedule has some interesting stuff, including a paper titled, “Simplification and Streaming of GIS Terrain for Web Clients”, whose abstract sounds quite good. I couldn’t find the paper online yet, but if you contact the authors, they may have a pre-print.

Although not relevant to terrain, I’m also interested in the 3DPIE effort mentioned at the bottom of the Web3D schedule for our model rendering.

Patrick