When using obj2gltf or gltf-pipeline are you passing in any flags that cause normals to be generated?
I believe the models produced by ContextCapture do not have normals by default, which is actually a good thing for photogrammetry models, where lighting is already baked into the texture. So if you are passing any generate-normals flags, try without them.
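If you want to confirm whether a converted model already has normals, a `.gltf` file is just JSON, so you can inspect its mesh primitives directly. This is a minimal sketch based on the glTF 2.0 layout (`meshes[].primitives[].attributes`); it is not part of obj2gltf or gltf-pipeline:

```python
import json

def has_normals(gltf: dict) -> bool:
    """Return True if any mesh primitive carries a NORMAL attribute."""
    for mesh in gltf.get("meshes", []):
        for prim in mesh.get("primitives", []):
            if "NORMAL" in prim.get("attributes", {}):
                return True
    return False

# Minimal glTF-like JSON with positions only, no normals:
doc = json.loads('{"meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}]}')
print(has_normals(doc))  # False
```

If this prints `True` for your ContextCapture output, some step in your pipeline is generating normals.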
I want to ask another question about how to convert glTF to b3dm…
I see that you use gltf-pipeline to convert to b3dm, but on GitHub it looks like gltf-pipeline cannot convert glTF to b3dm, as you can see from the pictures below.
So can you walk me through the entire process?
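For context, gltf-pipeline itself only converts between glTF and binary glTF (`.glb`); the b3dm step is a thin wrapper around a glb. Per the 3D Tiles 1.0 spec, a b3dm is a 28-byte header, a feature table (which must declare `BATCH_LENGTH`), and then the binary glTF. The `glbToB3dm` command in the 3d-tiles-tools repository does this for you, but here is a sketch of the wrapping itself so the process is clear (illustrative only, not the tools' actual code):

```python
import struct

def glb_to_b3dm(glb: bytes, batch_length: int = 0) -> bytes:
    """Wrap a binary glTF payload in a minimal b3dm container."""
    # The feature table JSON is required and must declare BATCH_LENGTH.
    ft_json = ('{"BATCH_LENGTH":%d}' % batch_length).encode("utf-8")
    # Pad the JSON with spaces so the glb starts on an 8-byte boundary
    # (28-byte header + feature table JSON).
    pad = (8 - (28 + len(ft_json)) % 8) % 8
    ft_json += b" " * pad
    byte_length = 28 + len(ft_json) + len(glb)
    # Header: magic, version, byteLength, featureTableJSONByteLength,
    # featureTableBinaryByteLength, batchTableJSONByteLength,
    # batchTableBinaryByteLength (all little-endian uint32).
    header = struct.pack("<4sIIIIII", b"b3dm", 1, byte_length,
                         len(ft_json), 0, 0, 0)
    return header + ft_json + glb
```

So the full route is roughly: glTF → glb (gltf-pipeline with its binary output option) → b3dm (3d-tiles-tools, or a wrapper like the above). Real tilesets also need a tileset.json describing the tile hierarchy.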
Actually, what I want to do is subdivide one big 3D model (a 500 MB 3ds Max file) into 3D Tiles, because I have seen that Cesium 3D Tiles can stream models with levels of detail.
The reason I want to convert is that when I load my high-resolution photogrammetry models (140 GB) from a high viewpoint, memory usage keeps rising until the application crashes. We realized this happened because, when we produced the data with ContextCapture, the coarsest LOD level was not set small enough (currently the first level is 16). We would have to regenerate the data with more levels (generating 3D Tiles with Bentley ContextCapture needs a very high-end machine and plenty of time), and it seems that this is the only way to solve the problem.
Besides photogrammetry models, we also want to load other 3D models like BIM, and loading massive data all at once will cause the same problem. So I want to subdivide the model data using 3D Tiles. Can you give me some suggestions?
Being able to subdivide massive models is one area the Cesium team is actively working on. The process can become very involved, especially when dealing with decimation and choosing good tile splits. The simplest solution would be to subdivide your data into a grid, but this approach has many downsides and won't perform very well.
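To make the grid idea concrete, the naive approach just buckets triangles by the cell containing their centroid. This is a hypothetical sketch, not Cesium code, and it shows why the approach is weak: there is no decimation for coarser LODs, and triangles crossing cell borders are not clipped:

```python
from collections import defaultdict

def grid_split(triangles, cell_size):
    """Bucket triangles into uniform XY grid cells, keyed by the cell
    containing each triangle's centroid (naive split: no decimation,
    no clipping of triangles that straddle cell borders)."""
    cells = defaultdict(list)
    for tri in triangles:  # tri = ((x, y, z), (x, y, z), (x, y, z))
        cx = sum(v[0] for v in tri) / 3.0
        cy = sum(v[1] for v in tri) / 3.0
        key = (int(cx // cell_size), int(cy // cell_size))
        cells[key].append(tri)
    return dict(cells)

tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),    # centroid ~ (0.33, 0.33)
        ((9, 9, 0), (10, 9, 0), (9, 10, 0))]  # centroid ~ (9.33, 9.33)
print(sorted(grid_split(tris, 5.0)))  # [(0, 0), (1, 1)]
```

Each cell would then become one tile; a real pipeline additionally builds simplified geometry for the parent tiles so the coarse LODs stay small.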