OpenWebGlobe uses a classic Chunked LOD approach. Terrain tiles are
triangular meshes with associated metadata (e.g. maximum geometric
error) that the Chunked LOD algorithm uses to decide whether or not to
split a tile. The meshes are computed server-side; computing all the
meshes for Switzerland from a 25 m resolution dataset takes a few
hours. The tiles are converted to JSON, gzip-compressed, and served
statically from Amazon S3 (actually we proxy S3 to do referrer checks,
but in theory they could be served directly from S3).
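The split decision in this kind of Chunked LOD renderer typically boils down to projecting each tile's stored maximum geometric error into screen space and comparing it against a pixel tolerance. A minimal sketch of that test (the tile fields and camera parameters here are illustrative, not OpenWebGlobe's actual API):

```javascript
// Sketch of the per-tile split test used in Chunked LOD (after T. Ulrich).
// Each tile carries its maximum geometric error in meters; we project that
// error to screen space and split when it exceeds a pixel tolerance.
function screenSpaceError(geometricError, distance, viewportHeight, fovY) {
  // Perspective projection of a world-space error onto the screen, in pixels.
  return (geometricError * viewportHeight) / (2 * distance * Math.tan(fovY / 2));
}

function shouldSplit(tile, cameraPos, viewportHeight, fovY, maxPixelError) {
  var dx = tile.centerX - cameraPos.x;
  var dy = tile.centerY - cameraPos.y;
  var dz = tile.centerZ - cameraPos.z;
  var distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return screenSpaceError(tile.maxError, distance, viewportHeight, fovY) > maxPixelError;
}
```

The nice property is that the per-tile metadata stays tiny: one error value per tile is enough to drive refinement.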
I should take a closer look at the terrain rendering and processing code in
OpenWebGlobe. It sounds like a good match algorithmically, and the license
is compatible.
Also, don't hesitate to contact the key guy behind OpenWebGlobe,
Martin Christen at the University of Applied Sciences Northwestern
Switzerland (FHNW). His email is <firstname>.<lastname>@fhnw.ch, and
he's very competent and helpful.
The core chunked LOD algorithm starts here:
https://github.com/OpenWebGlobe/WebViewer/blob/master/source/core/globerenderer.js#L489
Practically, the system works well, and it has the nice feature that
terrain meshes are simply 3D objects. The same code can then be used
to draw other models, e.g. buildings, and in theory the same Chunked
LOD/terrain tiles as models approach can be used to render Chunked LOD
cities à la Nokia Maps 3D WebGL ( http://maps3d.svc.nokia.com/webgl/
). This sort of aerially-captured city data is becoming increasingly
common, and it would be great if Cesium had the capability to display
it.
Agreed. Do you know of any open sources of city data, at least for testing?
I don't know of any open data, but I chatted to the guys from Acute 3D
http://www.acute3d.com/
at a conference recently. They're focused on the data capture, and
would be delighted to have a viewer to demonstrate what they can do.
I'll drop them an email to see if they'd be willing to share one of
their demo data sets.
3. Performance-wise (related to 1), I'd definitely profile gzipped raw
ArrayBuffer data against more sophisticated formats like the one
Google Body uses. I suspect there won't be much difference in the
amount of data transferred across the wire, and that raw ArrayBuffers
will be much faster to handle in the client. See also how Nokia do
it: http://idiocode.com/2012/02/01/nokia-3d-map-tiles/ .
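To illustrate why raw ArrayBuffers are so cheap to handle client-side: typed-array views can alias a binary tile response directly, with no per-element parsing. The tile layout below (a small header, then Float32 positions and Uint16 indices) is a made-up example for the sketch, not an actual Cesium or OpenWebGlobe format:

```javascript
// Sketch: interpreting a raw binary terrain tile as typed arrays with no
// per-element parsing. Layout (hypothetical): two Uint32 header words
// (vertexCount, indexCount), then vertexCount * 3 Float32 positions,
// then indexCount Uint16 indices.
function decodeRawTile(buffer) {
  var header = new Uint32Array(buffer, 0, 2);
  var vertexCount = header[0];
  var indexCount = header[1];
  var positions = new Float32Array(buffer, 8, vertexCount * 3);
  var indices = new Uint16Array(buffer, 8 + vertexCount * 12, indexCount);
  // Both views alias the original buffer: zero copies, zero parsing,
  // and they can be handed straight to gl.bufferData.
  return { positions: positions, indices: indices };
}
```

The gzip layer on the wire then does the entropy coding for free, which is why the transferred sizes often end up closer than expected.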
You may very well be right. The Google Body guys claimed a big improvement
over a simple gzipped mesh, but their data could be different from ours. I
believe there's an open source library for encoding data in their format, so
it may not be much work to try it out and see how much smaller (if any) the
data can get. My gut is that a 20% reduction in data size is worth a 20%
increase in client-side CPU time (just to make up numbers), especially if
some of that processing can be done in a web worker. But in any case,
developing our terrain engine will be an iterative process, and fancy
encodings won't be in the earliest iterations.
Indeed, and it looks like I might be wrong anyway: the Google Body
approach is 1/3rd the size of the raw 32-bit data.
Looking forward, I think it will be a common case that terrain data
will come from multiple sources and need to be combined somewhere. As
Cesium is a globe, a global data set is required, e.g. SRTM as you
mention on the Wiki. However, our clients (effectively states and
counties) typically have much more detailed data for their local
areas. Therefore, we need to be able to use more detailed data where
it is available, and fall back to less detailed data where it is not.
As combining data sets, especially terrain tiles, is a computationally
intensive process, I'd look first at preparing the data on the server
side and serving effectively static, optimized-for-the-client files,
rather than trying to combine data sets in real time on either the
server or the client.
Agreed, especially if serving optimized meshes. When working with height
maps (despite their limitations), on-the-fly processing is a bit more
practical.
I'm curious: do you agree with my assessment that combining imagery sources
on the client is reasonable, even if combining terrain sources is not?
Yes, definitely agree that combining imagery on the client is very
reasonable, and of course there are a number of approaches (e.g.
multitexturing or rendering to intermediate frame buffers and then
compositing).
This also opens the door to doing per-pixel image processing effects
in the fragment shader, which is particularly useful when combining
image layers. For example, you may wish to change the saturation of a
bright base layer so that it doesn't overwhelm other image layers
higher in the stack.
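As a sketch of the per-pixel math involved: desaturate the base layer towards its luma, then composite the next layer over it. In WebGL this would live in the fragment shader; it's written here in plain JavaScript so the arithmetic is easy to follow, and the Rec. 601 luma weights are an assumption, not anything Cesium prescribes:

```javascript
// Desaturate a color towards its grayscale (luma) value.
// saturation = 1 leaves the color unchanged, 0 yields pure grayscale.
function adjustSaturation(rgb, saturation) {
  // Rec. 601 luma weights (an assumption for this sketch).
  var luma = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2];
  return [
    luma + (rgb[0] - luma) * saturation,
    luma + (rgb[1] - luma) * saturation,
    luma + (rgb[2] - luma) * saturation
  ];
}

// Standard "over" compositing of a layer with opacity alpha on a base color.
function blendOver(base, layer, alpha) {
  return [
    layer[0] * alpha + base[0] * (1 - alpha),
    layer[1] * alpha + base[1] * (1 - alpha),
    layer[2] * alpha + base[2] * (1 - alpha)
  ];
}
```

In a shader the same two operations would run per fragment, with the layer textures bound as samplers.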
Finally, subjectively, I think it will be very hard to make gridded
terrain data work well in a web context, basically due to the greater
bandwidth needed to provide a similar quality to triangular mesh
terrain. I might be wrong though!
I tend to agree, despite the fact that, on the surface, gridded terrain
appears more compact because only the heights are explicitly represented. I
think there might still be a place for gridded terrain, though. Some
folks will be willing to accept popping artifacts and the like in order to
avoid a lengthy pre-processing step.
The lack of pre-processing is certainly an advantage, and it allows
people to get their data into Cesium quickly. Of course, in the client
you can convert gridded terrain into a triangular mesh easily (albeit
not efficiently) whereas the inverse operation is tricky. Such a crude
and inefficient operation may be sufficient for "quick start"
purposes.
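A crude version of that grid-to-mesh conversion might look like this: one vertex per grid post and two triangles per cell, with no simplification at all, which is exactly why it's easy but inefficient (all names are illustrative):

```javascript
// Naive conversion of a regular height grid to an indexed triangle mesh.
// heights: row-major array of width * height samples; cellSize: grid spacing.
// Produces one vertex per post and two triangles per cell, no simplification.
function gridToMesh(heights, width, height, cellSize) {
  var positions = new Float32Array(width * height * 3);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = (y * width + x) * 3;
      positions[i] = x * cellSize;
      positions[i + 1] = y * cellSize;
      positions[i + 2] = heights[y * width + x];
    }
  }
  var indices = new Uint32Array((width - 1) * (height - 1) * 6);
  var k = 0;
  for (var row = 0; row < height - 1; row++) {
    for (var col = 0; col < width - 1; col++) {
      var v = row * width + col; // top-left corner of this cell
      indices[k++] = v; indices[k++] = v + 1; indices[k++] = v + width;
      indices[k++] = v + 1; indices[k++] = v + width + 1; indices[k++] = v + width;
    }
  }
  return { positions: positions, indices: indices };
}
```

A real pipeline would then decimate the mesh against an error tolerance, but for "quick start" use the unsimplified output may be good enough.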
A further consideration is that there is a lot of geographical data
out there in local projections, e.g. UTM or UK Ordnance Survey Grid.
These data will have to be re-projected somewhere to be compatible
with Cesium's WGS84 ellipsoid, so it may well be the case that some
server-side processing is already needed.
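For what it's worth, the final step of any such re-projection, going from geodetic WGS84 coordinates to Earth-centered Cartesian, is straightforward; undoing the source projections themselves (UTM, the Ordnance Survey grid, etc.) is the part you'd hand to a library such as PROJ. A sketch of just the geodetic-to-ECEF step:

```javascript
// Convert geodetic WGS84 coordinates (latitude/longitude in radians,
// height in meters above the ellipsoid) to Earth-centered, Earth-fixed
// Cartesian coordinates in meters.
var WGS84_A = 6378137.0;          // semi-major axis, meters
var WGS84_E2 = 6.69437999014e-3;  // first eccentricity squared

function geodeticToEcef(latRad, lonRad, heightM) {
  var sinLat = Math.sin(latRad);
  var cosLat = Math.cos(latRad);
  // Radius of curvature in the prime vertical.
  var n = WGS84_A / Math.sqrt(1 - WGS84_E2 * sinLat * sinLat);
  return {
    x: (n + heightM) * cosLat * Math.cos(lonRad),
    y: (n + heightM) * cosLat * Math.sin(lonRad),
    z: (n * (1 - WGS84_E2) + heightM) * sinLat
  };
}
```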
Regards,
Tom