Folks,
We’re happy to announce plans for 3D buildings in Cesium:
The above demo is just a preview for some very cool streaming we’ll have later this summer.
Let’s use this thread as the official 3D buildings thread.
Thanks,
Patrick
Great news! I see there is good initial coverage over many cities. This will make pairing Cesium and Street View a much better experience.
The article mentions “In the coming months, we will expand 3D building streaming in Cesium to a tiled approach capable of cities with 100,000+ buildings.”
Out of curiosity, might the approach be similar to this https://groups.google.com/d/msg/cesium-dev/5E1zEuV5H0c/FlaqE-c_YSEJ
With various model LODs? Though the models would need to provide LOD levels for model LOD to be supported; otherwise, I assume the tile LOD would just decide what is and isn't visible.
It would be nice to have multiple 3D building sources, just like there are multiple imagery sources. I noticed that OpenStreetMap 3D buildings were mentioned in the article. So zone and road overlay data comes with the 3D buildings data, I assume that's also from CyberCity3D? So you can place those overlays over any base imagery provider?
This is great!
Like Hyper Sonic mentioned, I'm also very interested in LOD for the models, as I have some very detailed buildings (in a quadtree/octree Collada format for LOD).
Again, thank you Patrick and the Cesium team for this long-awaited feature.
We’re very excited to be rolling out this feature to the public over the next few months.
-mlp
I am trying to design a scheme for rendering 500,000+ tree models.
The scheme was described in a previous post:
https://groups.google.com/forum/#!topic/cesium-dev/E5TJYSe16qY
(I found a way to unload some of the loaded CZML files when needed.)
I put a preliminary result on this web page:
http://118.163.99.116:8080/Apps/3DModels.html
This is just an experiment; in the long term, I plan to follow a streaming scheme like the imagery/terrain tile providers.
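In outline, the load/unload flow is along these lines; a minimal sketch, assuming an existing Cesium.Viewer named viewer and placeholder tile file names:

// Sketch: load a tile's CZML when the camera approaches it, and remove
// the data source when the camera moves away. Tile names are placeholders.
var loadedTiles = {};

function loadTile(name) {
    if (!loadedTiles[name]) {
        loadedTiles[name] = Cesium.CzmlDataSource.load(name + '.czml');
        viewer.dataSources.add(loadedTiles[name]); // add() accepts a promise
    }
}

function unloadTile(name) {
    if (loadedTiles[name]) {
        loadedTiles[name].then(function(dataSource) {
            viewer.dataSources.remove(dataSource, true); // true = destroy
        });
        delete loadedTiles[name];
    }
}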
Patrick Cozzi wrote on Tuesday, April 28, 2015 at 06:08:03 UTC+8:
Cool demo! However, when the camera is close, nearby trees sometimes pop out of existence. I'm assuming there's only one model LOD level. Apparently Chrome won't let any one tab exceed 500 to 600 megabytes of RAM, which I'm guessing is the cause of the disappearing trees. Google Earth won't let you exceed 1 gigabyte of RAM. I'd buy more RAM if I knew web browsers would take advantage of it.
That was caused by some mistakes in my code. I have fixed them. Please try again.
@Hyper
So zone and road overlay data comes with the 3D buildings data, I assume that’s also from CyberCity3D?
The building data in the Seattle demo is from CyberCity3D. I believe the zone and road overlays are from Seattle’s public data portal.
So you can place those overlays over any base imagery provider?
Yes, the buildings, vector overlays, and base imagery can be mixed and matched.
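For example, something along these lines; a minimal sketch with placeholder URLs:

// Any imagery provider can be the base layer, with vector overlays on top.
var viewer = new Cesium.Viewer('cesiumContainer', {
    imageryProvider : new Cesium.UrlTemplateImageryProvider({
        url : '//tiles.example.com/{z}/{x}/{y}.png' // placeholder
    })
});
// Vector overlays load independently of the base layer:
viewer.dataSources.add(Cesium.GeoJsonDataSource.load('seattle-zoning.geojson'));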
@Hyper/Narco/Kevin
The streaming format will be tiled Hierarchical Level of Detail so it will only load a few buildings in the distance and then more as the user zooms in. We’ll publish the format as soon as I have something a bit more concrete.
Patrick
No more disappearing trees in the updated demo. It looks very good! Looking at Chrome's task manager, the tab went up to 650 megabytes at one point, with the GPU process up to 300 megabytes. With a lot of trees in view my framerate dropped like a rock, to single-digit fps, but then again I need a computer upgrade! I assume the tree streaming code is on the main thread, as there are a few pauses. Also, my network bandwidth to the server was quite low, probably with high latency as well, since the server is over 11,000 km away.
Thanks for the information, Patrick. It's awesome that you can mix and match all those data sources. I assume that for the first incarnation each building will have a single model LOD version but can suggest at which tile LODs it should be visible, though as you've mentioned the format is not concrete just yet.
The binary collada2gltf.exe only base64-encodes the "shaders" and "buffers"; the texture images are still referenced via URIs. This causes more files to be transmitted over the network. For example, this model requires three files to be transmitted (one glTF and two texture images):
"images": {
    "ID18": {
        "uri": "tex/g2_27048.jpg"
    },
    "ID31": {
        "uri": "tex/1Fre2_77.jpg"
    }
}
How can the texture images also be base64 encoded?
I have tried to implement a tiled hierarchical LOD scheme (video at the following link). I think a single glTF file will improve the overall performance!
The idea of the hierarchical LOD scheme: the scene is divided into tiles at five different scales. The links to the building models are stored in CZML files named S?X???Y???, according to their volume size (S?) and geo-position (X???Y???). That is, the larger buildings are at the higher scales and the smaller buildings at the lower scales. With this scheme, the scene shows only a few buildings, fetched from the higher scales, when the view is far from the ground; more small buildings are shown as the user zooms in.
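To make the naming concrete, here is a hypothetical sketch of how a building could be assigned a tile key; the volume thresholds and grid spacing are made up:

// Hypothetical: pick a scale level from the building's volume, then a
// grid cell from its position. All cutoffs and spacings are assumptions.
var volumeCutoffs = [100000, 10000, 1000, 100]; // m^3; S1 = largest buildings

function tileKey(building) {
    var scale = 1;
    while (scale <= volumeCutoffs.length &&
           building.volume < volumeCutoffs[scale - 1]) {
        scale++; // scale runs 1 (largest) through 5 (smallest)
    }
    // 0.01-degree grid cells, wrapped to three digits for X???Y???
    var x = Math.floor((building.longitude + 180.0) / 0.01) % 1000;
    var y = Math.floor((building.latitude + 90.0) / 0.01) % 1000;
    return 'S' + scale + 'X' + pad3(x) + 'Y' + pad3(y);
}

function pad3(n) {
    return ('00' + n).slice(-3);
}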
More coverage of building models is currently being produced according to this LOD scheme, which still needs improvement.
That's odd; CesiumMilkTruck.gltf has its image in base64 form. Although that's a PNG, I'd imagine it should work for JPEG as well.
Cool video! The volume idea sounds good; however, what about many small buildings bunched up, together taking up more volume than the larger buildings?
I think model LOD is very important, with all models having at least a very simple LOD form along with the standard form, just so that at least you know something is there instead of empty space.
Has anyone done performance tests with simple 6-sided buildings with very low-res textures? (A starting point for the first test is sketched after this list.)
-One test to see how many can be rendered at one time before performance degrades (no streaming)
-Another test on how many simple buildings can be streamed into memory on an average internet connection and computer (no rendering)
-Then a final test combining streaming and rendering
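For the render-only test, a minimal sketch that fills the scene with untextured box "buildings"; the count, sizes, and Seattle-ish coordinates are arbitrary:

// Minimal render stress test: N simple boxes on a city-sized grid.
var viewer = new Cesium.Viewer('cesiumContainer');
var count = 10000; // raise until the framerate degrades
var side = Math.ceil(Math.sqrt(count));
for (var i = 0; i < count; i++) {
    var lon = -122.35 + 0.0005 * (i % side);
    var lat = 47.60 + 0.0005 * Math.floor(i / side);
    viewer.entities.add({
        // position is the box center, so height = half the box height
        position : Cesium.Cartesian3.fromDegrees(lon, lat, 25.0),
        box : {
            dimensions : new Cesium.Cartesian3(30.0, 30.0, 50.0),
            material : Cesium.Color.GRAY
        }
    });
}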
Hi,
@Kevin Sun I had the same problem with the collada2gltf.exe converter. The converter can integrate the images into the glTF file; I just had to set the working directory of the process to that of the current Collada file so the process finds the images. It works for both PNG and JPEG.
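If you'd rather patch the glTF yourself, here is a minimal Node.js sketch that inlines the images as data URIs (the file name is assumed):

// Replace external image URIs in a glTF 1.0 "images" map with data URIs.
var fs = require('fs');
var path = require('path');

var gltfPath = 'model.gltf'; // assumed file name
var gltf = JSON.parse(fs.readFileSync(gltfPath, 'utf8'));

Object.keys(gltf.images).forEach(function(id) {
    var image = gltf.images[id];
    var file = path.resolve(path.dirname(gltfPath), image.uri);
    var mime = (path.extname(file) === '.png') ? 'image/png' : 'image/jpeg';
    image.uri = 'data:' + mime + ';base64,' +
        fs.readFileSync(file).toString('base64');
});

fs.writeFileSync(gltfPath, JSON.stringify(gltf));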
But I think you will be disappointed: decoding the base64 images is not really fast. I think this will only improve performance if you have many really small images.
I personally had better results with the binary glTF format. See this pull request: https://github.com/AnalyticalGraphicsInc/cesium/pull/2659
But you have to create your own process to generate the bgltf format.
Binary glTF should help a lot. Base64 carries only 6 bits of data per byte, so an embedded file grows by a third (for example, a 300 KB JPEG becomes roughly 400 KB of text), and base64 has to be decoded while binary does not. It seems that resources will be in separate files, but all zipped up into a single gzip archive.
The Seattle demo looks great! I'd love to see the capability to color code buildings based on the parameter values. For example, every building over a certain volume or height turn a certain color... or color code "office" buildings blue and "residential" buildings red (assuming custom parameters will be added).
Will it be possible to split up the buildings by levels? To me, this is the real value of looking at the buildings in 3D... the ability to see several levels at the same time.
Also, how can a synergy be created with multiple clients adding data to the individual buildings? A lot of data fields could be useful to multiple people, such as "year built", "address", "# of floors", etc. Can the platform be set up to allow some universal shared building parameters? This would de-duplicate the inputting efforts of the many people who may be using the data.
Chris,
Thanks for the interest.
I’d love to see the capability to color code buildings based on the parameter values.
Yes, me too. The first step is streaming building geometry and textures, but the real usefulness is going to come with the ability to interact with buildings: get a building’s id, highlight a building, show/hide a building, color code, etc.
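As a rough sketch of what that interaction might look like using Cesium's picking API (the building-specific calls are hypothetical until the format is finalized):

// Pick flow: scene.pick works today; the buildings API is hypothetical.
var handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);
handler.setInputAction(function(movement) {
    var picked = viewer.scene.pick(movement.position);
    if (Cesium.defined(picked) && Cesium.defined(picked.id)) {
        console.log('Picked building: ' + picked.id);
        // A future buildings API might then allow something like:
        // buildings.getFeature(picked.id).show = false;
        // buildings.getFeature(picked.id).color = Cesium.Color.RED;
    }
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);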
Will it be possible to split up the buildings by levels?
What exactly do you mean? Show/hide buildings based on a parameter?
Also, how can a synergy be created with multiple clients adding data to the individual buildings?
Cesium could be a platform to build a crowdsourced building metadata database, but it’s not in the scope of the initial release. In the Seattle demo, when a building is clicked, we use its longitude/latitude to query the CyberCity 3D REST API, which has metadata, and also query the Bing REST API reverse geocoder for the address.
Patrick
By levels, I mean separate floors like Cube Cities has done. I envision that this would work by inputting the number of floors for a particular building and letting the software divide them equally over the height of the building. Then individual floor elevations could be adjusted as needed.
Is there some kind of GUID for each individual building that can also be made visible in the parameters? What would be the best way to report errors in addresses, building geometry, etc. so they can be updated/corrected?
Hi,
By levels, I mean separate floors like Cube Cities has done.
Ah. Initially, the way I envision this working is that the entire city dataset is streamed, but individual buildings can be turned on/off, so a building of interest can be turned off in the streaming dataset and then replaced by any custom 3D model or geometries, with any custom operations. If it isn't too application-specific, we can look at adding helpers for the customization.
Is there some kind of GUID for each individual building that can also be made visible in the parameters? What would be the best way to report errors in addresses, building geometry, etc. so they can be updated/corrected?
The initial dataset is from CyberCity3D. Each building has a unique ID that can be used to query their web services for metadata.
As for reporting errors, I imagine you are talking about a crowdsourced dataset? I haven't given it any thought yet, but, for example, if we import OSM buildings, I am all for interoperating with their reporting/editing system.
Patrick
Folks,
Kevin DeVito from CyberCity 3D wrote an article about the Seattle demo we recently posted: 3D Streaming Maps: Enhancing GIS Assets Streaming 3D Smart Buildings.
Although the article recommends texturing only select buildings to make them stand out, Cesium will support fully textured 3D cities. Basically, once a leaf tile is reached, first its geometry will stream in, then its textures will follow.
I’m curious about people’s use cases for textured vs. non-textured buildings. Let me know what you think.
Patrick
Awesome stuff! By leaf tile I assume you mean a tile whose children/grandchildren/etc. won't add any more terrain detail even if it were to continue subdividing, which also assumes that 3D buildings work with the same tiles as the terrain data. It's difficult to tell with Seattle, as I suspect the terrain isn't far off from the ellipsoid, but I wonder whether the buildings clamp to terrain when terrain is switched on.
So all visible 3D buildings will also have their textures streamed, not just nearby ones? In this particular demo it seems that once a building is loaded into memory it stays there, as on subsequent visits buildings pop up immediately, though on the first visit they load gradually without causing any framerate hiccups.
Hi Hyper,
which also assumes that 3D buildings work with the same tiles as the terrain data.
I don’t anticipate using the same tiles or tiling scheme as terrain. For example, if we take into account building density and the complexity of individual buildings, we can come up with non-uniform subdivisions that are more optimal than the traditional uniform subdivision used by a vanilla quadtree. I’ll write some tech articles in time.
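As a rough illustration of the idea, subdivision could be driven by per-tile content rather than a fixed grid; the thresholds and the split() helper here are hypothetical:

// Hypothetical: keep splitting a tile only while its content is too heavy.
function subdivide(tile) {
    if (tile.buildingCount > 500 || tile.triangleCount > 250000) {
        tile.children = split(tile); // split() partitions the tile's buildings
        tile.children.forEach(subdivide);
    }
}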
It's difficult to tell with Seattle, as I suspect the terrain isn't far off from the ellipsoid, but I wonder whether the buildings clamp to terrain when terrain is switched on.
In the Seattle demo, each building is an individual model. This is fine for a few thousand models, but will not scale to the 100,000+ buildings we want to support (well, we should handle 1,000,000+ without problem).
The Seattle demo also does not account for terrain, but this is something we will account for in the final version. It could be done offline as a preprocess or at runtime. There are tradeoffs for either that we need to think through.
So all visible 3D buildings will also have their textures streamed, not just nearby ones?
For buildings that include textures, first the geometry for the building will load, then as the viewer zooms in, the texture will load.
Patrick