Hello,
I would like to align non-georeferenced models (loaded as a Cesium3DTileset) to real-world positions using two control points: two points in the model and two corresponding points in the real world.
My initial approach was as follows:
I defined the real-world coordinates using Cartesian3.fromDegrees and the model coordinates as Cartesian3 values in the model's local coordinate system.
I then translated the model so that ModelPos1 overlapped with WorldPos1.
Next, I calculated the distance between ModelPos1 and ModelPos2 and the distance between WorldPos1 and WorldPos2 to determine the scaling factor.
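For illustration, the translation and scale steps might look like this. This is a minimal sketch with illustrative point values and plain-JavaScript stand-ins for the Cesium.Cartesian3 operations (subtract, distance), so the names and numbers are assumptions, not the original code:

```javascript
// Minimal vector helpers standing in for Cesium.Cartesian3 operations.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const length = (v) => Math.hypot(v.x, v.y, v.z);
const distance = (a, b) => length(sub(a, b));

// Illustrative control points (in practice, the world positions would
// come from Cesium.Cartesian3.fromDegrees, the model positions from
// the model's local coordinate system).
const modelPos1 = { x: 0, y: 0, z: 0 };
const modelPos2 = { x: 3, y: 0, z: 0 };
const worldPos1 = { x: 100, y: 0, z: 0 };
const worldPos2 = { x: 100, y: 6, z: 0 };

// Step 1: translation that moves modelPos1 onto worldPos1.
const translation = sub(worldPos1, modelPos1); // → (100, 0, 0)

// Step 2: uniform scale factor from the two point-pair distances.
const scaleFactor =
  distance(worldPos1, worldPos2) / distance(modelPos1, modelPos2); // → 2
```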
Now, I need to rotate the model around ModelPos1 so that ModelPos2 aligns with WorldPos2. How can I achieve this? Alternatively, is there a better way to align the model using these four points?
In 2D, that transformation would be uniquely determined by the two source and target points. But in 3D, there is an additional degree of freedom: the model could still be rotated around the line that connects these two points.
I assume that there is some concept of an “up-direction” that should be preserved, is that correct? (If not, it would be necessary to describe what the desired orientation should be…)
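To make that remaining degree of freedom concrete: the rotation that maps the model-space direction onto the world-space direction is only determined up to an extra roll about the target direction. A minimal sketch using Rodrigues' rotation formula (plain JavaScript with illustrative direction values; in CesiumJS one would typically build a Quaternion from the axis and angle instead):

```javascript
// Vector helpers.
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const scale = (v, s) => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const normalize = (v) => scale(v, 1 / Math.hypot(v.x, v.y, v.z));

// Rodrigues' formula: rotate v by angle theta around the unit axis k.
function rotateAroundAxis(v, k, theta) {
  const c = Math.cos(theta);
  const s = Math.sin(theta);
  return add(
    add(scale(v, c), scale(cross(k, v), s)),
    scale(k, dot(k, v) * (1 - c))
  );
}

// Illustrative directions: from ModelPos1 to ModelPos2, and from
// WorldPos1 to WorldPos2.
const modelDir = normalize({ x: 1, y: 0, z: 0 });
const worldDir = normalize({ x: 0, y: 1, z: 0 });

// One rotation that aligns modelDir with worldDir: rotate around the
// axis perpendicular to both, by the angle between them.
const axis = normalize(cross(modelDir, worldDir));
const angle = Math.acos(dot(modelDir, worldDir));
const aligned = rotateAroundAxis(modelDir, axis, angle); // ≈ worldDir

// The free degree of freedom: any additional roll about worldDir
// leaves worldDir itself unchanged, so that roll must come from an
// extra constraint, e.g. preserving an "up" direction.
const rolled = rotateAroundAxis(aligned, worldDir, 0.5); // still ≈ worldDir
```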
However, I can roughly imagine what you are trying to achieve. There are still a few unknowns, though. These include how the geo-position of the input model was determined to begin with (i.e., does the tileset have a tileset.root.transform that is not the identity matrix?).
But here is an example Sandcastle that makes a few assumptions, together with some code snippets that may be helpful for the overall goal:
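Without reproducing the Sandcastle itself, the overall mapping the three steps describe is p ↦ WorldPos1 + s · R · (p − ModelPos1), which by construction sends ModelPos1 to WorldPos1 and ModelPos2 to WorldPos2. A self-contained numeric sketch (plain JavaScript, all point values illustrative; in CesiumJS the same transform would be packed into a Matrix4 and assigned to tileset.modelMatrix):

```javascript
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (v, s) => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const norm = (v) => Math.hypot(v.x, v.y, v.z);
const unit = (v) => scale(v, 1 / norm(v));

// Rotate v around the unit axis k by theta (Rodrigues' formula).
const rotate = (v, k, t) =>
  add(
    add(scale(v, Math.cos(t)), scale(cross(k, v), Math.sin(t))),
    scale(k, dot(k, v) * (1 - Math.cos(t)))
  );

// Illustrative control points.
const M1 = { x: 0, y: 0, z: 0 }, M2 = { x: 1, y: 0, z: 0 };
const W1 = { x: 10, y: 10, z: 0 }, W2 = { x: 10, y: 12, z: 0 };

const s = norm(sub(W2, W1)) / norm(sub(M2, M1)); // uniform scale
const a = unit(sub(M2, M1));                     // model direction
const b = unit(sub(W2, W1));                     // world direction
// (degenerate if a and b are parallel or anti-parallel; that case
// needs separate handling because the cross product vanishes)
const axis = unit(cross(a, b));
const angle = Math.acos(Math.max(-1, Math.min(1, dot(a, b))));

// Map a model point: p -> W1 + s * R * (p - M1).
const mapPoint = (p) => add(W1, scale(rotate(sub(p, M1), axis, angle), s));

const mapped = mapPoint(M2); // lands on W2: (10, 12, 0)
```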
You’re right about the overconstraint, but that is what I was asked to implement. The inaccuracies between your red and orange arrows should be negligible.
Thank you for thinking this through with me, and for the pictures and the Sandcastle sample solution. This will help me a lot.
(Btw, the tileset has identity as root.transform)