I want to classify my 3D tilesets. Can someone let me know what software can be used to generate a classification tileset?
I see this tileset in the sandbox examples, but I can’t find any information on how it’s produced.
I use Bentley ContextCapture to output the b3dm tileset, and I don’t see an option to create bounding boxes there. I’m also interested in how metadata can be added to each bounding box within the classification.
Welcome to the Cesium forum @digiiitalmb! The bounding volumes in that particular example were created by hand: a box of the right dimensions was modeled in Blender and then uploaded to Cesium ion.
I’ve personally found it a lot easier to use 2D data for this type of classification. For example, if you overlay a GeoJSON (or a KML, etc.) on top of your 3D Tiles, you can highlight particular buildings and add metadata that way. This is what we did in this blog to highlight the roads in the Melbourne photogrammetry model: https://cesium.com/blog/2018/11/05/dynamic-3d-tiles-annotations/.
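As a rough sketch of that approach (the coordinates, property names, and the existing `viewer` below are all hypothetical assumptions, not from the example above): metadata lives in the GeoJSON `properties`, the polygon is loaded with `clampToGround`, and each entity’s polygon is set to classify the 3D Tiles so the matching part of the model is highlighted and its properties show up when picked.

```javascript
// Hypothetical building footprint; metadata goes in GeoJSON `properties`
// and is shown in the InfoBox when the highlighted area is clicked.
const footprint = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      properties: { name: "Building A", assetId: "A-102" }, // hypothetical metadata
      geometry: {
        type: "Polygon",
        coordinates: [
          [
            [144.96, -37.81],
            [144.961, -37.81],
            [144.961, -37.811],
            [144.96, -37.811],
            [144.96, -37.81], // ring closes on the first vertex
          ],
        ],
      },
    },
  ],
};

// Browser-only part (assumes an existing CesiumJS `viewer`):
// drape the polygon onto the tileset instead of drawing it at height 0.
if (typeof Cesium !== "undefined") {
  Cesium.GeoJsonDataSource.load(footprint, { clampToGround: true }).then(
    (dataSource) => {
      viewer.dataSources.add(dataSource);
      for (const entity of dataSource.entities.values) {
        // Classify the 3D Tiles surface rather than the terrain.
        entity.polygon.classificationType =
          Cesium.ClassificationType.CESIUM_3D_TILE;
        entity.polygon.material = Cesium.Color.YELLOW.withAlpha(0.5);
      }
    }
  );
}
```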
I’m curious to hear which approach works best for you.
Our main issue is that we need to classify objects vertically: we need to classify vents on the building rooftop, and they can be stacked on top of each other. While the GeoJSON solution would be pretty good, from what I understand it’s not going to work in this case, since the highlight would cover anything in that area vertically, correct?
I was looking into the Cesium Analytics SDK and was wondering whether certain geometries could be highlighted using its custom sensor geometries?
Thanks for clarifying your use case @digiiitalmb. You are correct that classifying with a GeoJSON or any 2D polygon will highlight the entire vertical extent, as opposed to a specific 3D volume.
One thing you could do is generate the geometry using the CesiumJS API. This is how this example works:
It defines a few ellipsoid geometries to highlight the trees, and a box geometry to highlight a window on the face of a building.
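For the stacked-vents case, a minimal sketch of that idea: one classification box per vent, each centered at a different height. The coordinates, rooftop elevation, vent dimensions, and the existing `viewer` are all hypothetical assumptions for illustration; the small helper computing the stacked box centers is plain JavaScript.

```javascript
// Center heights for `count` boxes stacked vertically, each `boxHeight` tall,
// starting at `baseHeight` (e.g. a rooftop elevation). Hypothetical helper.
function stackedCenters(baseHeight, boxHeight, count) {
  const centers = [];
  for (let i = 0; i < count; i++) {
    centers.push(baseHeight + boxHeight * (i + 0.5));
  }
  return centers;
}

// Browser-only part (assumes an existing CesiumJS `viewer`):
// a ClassificationPrimitive box per vent, classifying only the 3D Tiles.
if (typeof Cesium !== "undefined") {
  const lon = 144.96, lat = -37.81;        // hypothetical rooftop location
  const rooftop = 25.0, ventHeight = 2.0;  // meters, hypothetical
  for (const h of stackedCenters(rooftop, ventHeight, 3)) {
    const center = Cesium.Cartesian3.fromDegrees(lon, lat, h);
    viewer.scene.primitives.add(
      new Cesium.ClassificationPrimitive({
        classificationType: Cesium.ClassificationType.CESIUM_3D_TILE,
        geometryInstances: new Cesium.GeometryInstance({
          geometry: Cesium.BoxGeometry.fromDimensions({
            vertexFormat: Cesium.PerInstanceColorAppearance.VERTEX_FORMAT,
            dimensions: new Cesium.Cartesian3(1.5, 1.5, ventHeight),
          }),
          // Places the box's center at `center`, axes aligned east-north-up.
          modelMatrix: Cesium.Transforms.eastNorthUpToFixedFrame(center),
          attributes: {
            color: Cesium.ColorGeometryInstanceAttribute.fromColor(
              Cesium.Color.RED.withAlpha(0.5)
            ),
          },
          id: "vent-" + h, // returned by scene.pick for per-vent metadata lookup
        }),
      })
    );
  }
}
```

Since each box only spans its own vent's height range, picking one highlights just that vent rather than everything above the footprint.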
Do you already have known locations in 3D space for these vents and rooftops you want to highlight? If so, this may be the easiest approach. Let me know if that works for you.
The Analytics SDK would be a good option if you had sensor positions and a defined range/angle for each sensor, and wanted to highlight what parts of the scene are visible to those sensors. But you don’t need it if you already know ahead of time which volumes should be highlighted.