Why can’t the dynamic camera position be changed programmatically?

Methods such as changing the camera’s transform.position or using globeAnchor.SetPositionLongitudeLatitudeHeight() do not seem to have any effect. The only way to adjust the Cesium dynamic camera position is manually through the ‘Cesium Globe Anchor’ position properties in the Inspector. At launch, the height is automatically set to 19999999, ignoring both code and Inspector settings. Is it possible to move the camera programmatically in the latest version of Cesium? Does it work when the camera is positioned to look at the globe from space?

It seems Cesium initialization is completely broken:

using UnityEngine;
using Unity.Mathematics;
using CesiumForUnity;

public class CesiumCameraScript : MonoBehaviour 
{
    private CesiumGlobeAnchor globeAnchor;

    // initial position values
    public float longitude = 0.0f;
    public float latitude = 0.0f;
    public float height = 8_000_000.0f;

    void Start() 
    {
        // Get the CesiumGlobeAnchor component from the camera object
        globeAnchor = GetComponent<CesiumGlobeAnchor>();
        if (globeAnchor == null)
        {
            Debug.LogError("CesiumGlobeAnchor component is not found on this GameObject.");
            return;
        }
        // Start checking for initialization every 0.05 seconds
        InvokeRepeating(nameof(CheckInitialization), 0f, 0.05f);
    }

    void CheckInitialization()
    {
        if (UpdateCameraPosition())
        {
            CancelInvoke(nameof(CheckInitialization));
        }
    }

    bool UpdateCameraPosition()
    {
        Debug.Log($"Camera transform position at {transform.position.ToString()}");
        Debug.Log($"Camera Rotation: {transform.rotation.eulerAngles}");

        Quaternion targetRotation = Quaternion.Euler(90, 180, 0);
        if (Quaternion.Angle(transform.rotation, targetRotation) < 1e-6f)
        {
            Debug.Log($"Camera Rotation: OK");
            return true;
        }

        globeAnchor.longitudeLatitudeHeight = new double3(longitude, latitude, height);
        transform.localPosition = new Vector3(0, 0, 0);
        transform.rotation = targetRotation;
        
        Debug.Log($"Camera transform2 position at {transform.position.ToString()}");
        Debug.Log($"Camera Rotation2: {transform.rotation.eulerAngles}");

        Debug.Log($"Camera Rotation: ERR");
        return false;
    }
}

And it’s logging a series of issues:

Camera transform position at (0.00, 0.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:40)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: (348.32, 80.11, 260.11)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:41)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera transform2 position at (0.00, 0.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:55)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation2: (90.00, 180.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:56)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: ERR
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:58)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera transform position at (0.00, 0.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:40)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: (348.32, 80.11, 260.11)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:41)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera transform2 position at (0.00, 0.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:55)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation2: (90.00, 180.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:56)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: ERR
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:58)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera transform position at (0.00, 0.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:40)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: (90.00, 180.00, 0.00)
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:41)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

Camera Rotation: OK
UnityEngine.Debug:Log (object)
CesiumCameraScript:UpdateCameraPosition () (at Assets/CesiumCameraScript.cs:46)
CesiumCameraScript:CheckInitialization () (at Assets/CesiumCameraScript.cs:31)

When we set the rotation and read its value, we sometimes see the correct output, like ‘Camera Rotation2: (90.00, 180.00, 0.00).’ However, in subsequent function runs, the value can differ significantly, showing ‘Camera Rotation: (348.32, 80.11, 260.11).’ This suggests that Cesium’s behavior is unstable and unpredictable during initialization, leading to unexpected results from any operations performed at this stage. Furthermore, there appears to be no reliable way to detect when initialization is complete.

Not sure what you mean by “initialization” or how it might be broken. If you’re referring to the globe anchor updating its position based on the transform, this happens every time the globe anchor receives OnEnable, Reset, or OnTransformParentChanged messages from Unity. You can also manually call CesiumGlobeAnchor.Sync if you need it to happen immediately.

In your code, you are setting the longitudeLatitudeHeight property of the CesiumGlobeAnchor, which updates the transform of the object. You are then resetting that object’s transform back to (0, 0, 0), which places it at the location of the CesiumGeoreference. This is probably the reason you’re seeing issues with positioning.
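To illustrate, here is a minimal sketch of the corrected flow (the component and coordinates are examples; it assumes the camera GameObject already has a CesiumGlobeAnchor):

```csharp
using Unity.Mathematics;
using UnityEngine;
using CesiumForUnity;

public class MoveAnchoredCamera : MonoBehaviour
{
    void Start()
    {
        CesiumGlobeAnchor anchor = GetComponent<CesiumGlobeAnchor>();

        // Setting the globe-relative position updates the Unity transform
        // for you. Do NOT reset transform.position afterwards -- that would
        // snap the object back to the CesiumGeoreference origin.
        anchor.longitudeLatitudeHeight = new double3(0.0, 0.0, 8_000_000.0);
    }
}
```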

As for rotation, this is intended behavior - you have “Adjust Orientation For Globe When Moving” checked. Here’s the description of that property:

The Earth is not flat, so as we move across its surface, the direction of “up” changes. If we ignore this fact and leave an object’s orientation unchanged as it moves over the globe surface, the object will become increasingly tilted and eventually be completely upside-down when we arrive at the opposite side of the globe.

When this setting is enabled, this component will automatically apply a rotation to the game object to account for globe curvature any time the game object’s position on the globe changes.

This property should usually be enabled, but it may be useful to disable it when your application already accounts for globe curvature itself when it updates a game object’s position and orientation, because in that case the game object would be over-rotated.

If you uncheck this box, you shouldn’t see issues with your camera rotation changing!

As an aside, you should store your longitude and latitude values as doubles rather than floats. The imprecision of a 32-bit float can result in differences of meters in some cases. If you’re just trying to look at the globe in space, you should be fine with floats, but if you get up close to the surface this can start to be an issue.

Hi @azrogers, thank you for your response! Unfortunately, the ‘Adjust Orientation For Globe When Moving’ option doesn’t solve the issue. When unchecked, we lose the ability to rotate the globe and can only move it. For the interaction style of Google Earth Pro (at a large scale), we need to fix the camera’s position and rotate the globe by latitude and longitude coordinates.

From what I understand, there’s currently no way to rotate the camera around the globe without performing full 3D calculations for both the georeference and camera objects. As a result, rotating the globe itself seems to be the only viable approach. Applying a rotation of Quaternion.Euler(90, 180, 0) to the Cesium Dynamic Camera (it needs to be applied to transform.rotation, as it doesn’t work with transform.localRotation for some reason) allows us to view the globe. Any other rotation results in the globe not being visible.

However, when zooming and rotating by setting globeAnchor.longitudeLatitudeHeight = new double3(longitude, latitude, height);, it ends up moving the camera itself. This means we need to fix the camera’s position, as changing the camera’s transform changes the globe view.

In other words, if the camera is a child of the georeference object, and the latitude/longitude positioning applies only to the georeference (not the camera directly), it creates circular dependencies and initialization issues. If the camera could instead be the parent object, or if the camera itself could be rotated, these issues would be avoided. But having to factor in the globe’s position and relief just to compute camera positions and rotations for zooming and rotating the globe seems overcomplicated.

I’ve tried disabling the Cesium Camera Controller, but it doesn’t seem to help. The controller appears intended for certain types of camera movement, but there’s no documentation on how to set up initial positions and rotations for the georeference and camera objects in order to try it. In any case, the available input actions are quite limited: basic interactions like mouse-scroll zooming and rotation, or pinch gestures for touchscreens, are not provided (actually, I have no idea why “Speed Change Action” or “Speed Reset Action” are included when there are no actions to zoom and rotate the camera around some other object).

Probably, questions like these have been raised and answered multiple times, but I’m struggling to find clear answers. I noticed that Cesium’s documentation only once mentions the Dynamic Camera, stating, “Stay tuned for a future tutorial on using the Dynamic Camera to transition between global locations,” but provides no further details (Cesium for Unity Georeferenced Sub-scenes – Cesium). I also haven’t found information on the forum addressing the ‘chicken-and-egg’ issue with georeference and camera initialization.

Hi @MBG,

There’s a lot to unpack here, and it’s a little hard to decide where to start. But maybe it would help to explain a bit about how an object with a CesiumGlobeAnchor interacts with a CesiumGeoreference. It’s possible you already know this, but I think making sure we have this baseline of shared understanding will be useful.

A CesiumGeoreference defines where on the globe the Unity world coordinate system is located. Specifically, the Unity coordinate system’s origin (0,0,0) will be at that location on the globe, and the Unity +X axis will point East, the +Y axis will point up, and the +Z axis will point North.

An object with a CesiumGlobeAnchor (which includes DynamicCamera) is positioned and oriented relative to the globe, not relative to the Unity coordinate system the way normal objects are. Which means that if you change the CesiumGeoreference origin, all globe anchored objects will get a new Unity location and orientation such that their location and orientation relative to the globe is unchanged. If your camera has a CesiumGlobeAnchor, then changing the CesiumGeoreference will have no impact whatsoever on the view of the globe.

When you change the Unity transform of an object with a CesiumGlobeAnchor, Cesium for Unity will eventually compute a new globe location and orientation from it. I say eventually because Unity does not provide a notification for this, so we have to periodically poll for it in a coroutine. This can sometimes lead to inconsistencies that manifest as weird behavior. As Ashley mentioned, the best thing to do is to call Sync on the CesiumGlobeAnchor immediately after changing the transform, which will ensure the globe-relative location and orientation are updated immediately.
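For instance, the Sync pattern described above might look like this (a hypothetical component, assuming a CesiumGlobeAnchor on the same GameObject):

```csharp
using UnityEngine;
using CesiumForUnity;

public class SyncAfterTransformChange : MonoBehaviour
{
    void Start()
    {
        // Change the Unity transform directly...
        transform.rotation = Quaternion.Euler(90f, 180f, 0f);

        // ...then force the anchor to recompute its globe-relative position
        // and orientation now, instead of waiting for the polling coroutine.
        GetComponent<CesiumGlobeAnchor>().Sync();
    }
}
```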

Everything I’ve just described is very much by design. If you’re trying to “spin the globe” by changing the CesiumGeoreference, you must use a camera without a CesiumGlobeAnchor.

If I were going to write a Google Earth (or CesiumJS) style camera system for Cesium for Unity, though, I definitely would not try to do it by manipulating the CesiumGeoreference. Instead, I would position and orient the camera in globe-fixed ECEF coordinates based on input, and set the new values on the camera’s CesiumGlobeAnchor. I would let the CesiumGeoreference follow the camera via the CesiumOriginShift component, because this will give us the best precision.
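A rough sketch of that approach follows. The input handling is purely illustrative, and it assumes a recent Cesium for Unity where the anchor exposes its ECEF position as positionGlobeFixed (check your version's API); a CesiumOriginShift on the same object keeps the georeference origin near the camera:

```csharp
using Unity.Mathematics;
using UnityEngine;
using CesiumForUnity;

public class EcefOrbitCamera : MonoBehaviour
{
    // Geodetic state in double precision, as recommended earlier in the thread.
    public double longitude;
    public double latitude;
    public double height = 8_000_000.0;

    private CesiumGlobeAnchor anchor;

    void Start()
    {
        anchor = GetComponent<CesiumGlobeAnchor>();
    }

    void Update()
    {
        // Example input: scroll wheel zooms, axes pan across the surface.
        height = math.max(1000.0, height - Input.mouseScrollDelta.y * height * 0.1);
        longitude += Input.GetAxis("Horizontal") * Time.deltaTime * 10.0;
        latitude += Input.GetAxis("Vertical") * Time.deltaTime * 10.0;

        // Convert LLH to ECEF and hand the result to the globe anchor,
        // which converts it to Unity coordinates for us.
        double3 ecef = CesiumWgs84Ellipsoid.LongitudeLatitudeHeightToEarthCenteredEarthFixed(
            new double3(longitude, latitude, height));
        anchor.positionGlobeFixed = ecef;
    }
}
```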

Just in case it’s helpful and you haven’t already seen it, there’s more detail about globe anchoring in this tutorial:

Hi @Kevin_Ring

Thank you — this explains a lot! If polling is needed for implementation, that seems reasonable.

Yes, now I see that it’s mandatory.

It seems reasonable to apply ‘True Origin’ for the Cesium Georeference and use the Cesium Globe Anchor to rotate the camera, right? This way, we can automatically track the camera to maintain a consistent view of the globe as follows:

using UnityEngine;

public class CesiumCameraLookAtScript : MonoBehaviour
{
    void Start()
    {
        transform.LookAt(new Vector3(0, 0, 0));
    }
    // ...
}

However, in this case, we need to use the CesiumWgs84Ellipsoid.LongitudeLatitudeHeightToEarthCenteredEarthFixed and TransformEarthCenteredEarthFixedPositionToUnity transformations to accurately place objects with metric sizes on the globe. Since these functions are missing on Android and possibly some other platforms (‘The native implementation is missing, so CenteredFixedToLongitudeLatitudeHeight cannot be invoked’), how can we convert geographic coordinates to ECEF positions, and then ECEF positions to Unity world positions?

It seems reasonable to apply ‘True Origin’ for the Cesium Georeference and use the Cesium Globe Anchor to rotate the camera, right?

I wouldn’t recommend using True Origin for tilesets where the true origin is the center of the Earth (which is most of them). The problem is that Unity uses a single-precision coordinate system, which means coordinate values near the surface are very large, which can and will lead to jittering artifacts.

As I mentioned before, my recommendation is to let the CesiumOriginShift keep the CesiumGeoreference origin near the camera, do your math in ECEF, and let the CesiumGlobeAnchor convert it to Unity coordinates.

Since these functions are missing on Android and maybe some other platforms

This is surprising. I’ll reply to your other thread.

Thanks, I’m already quite surprised by the imprecise Unity coordinates; it creates a lot of hassle.

It’s impossible for “CesiumOriginShift to keep the CesiumGeoreference origin near the camera” when the CesiumGeoreference is placed on the globe surface, and the camera is far away in orbit to view the entire globe.

“Do your math in ECEF” means we wouldn’t use Cesium’s coordinate calculations; instead, we would need to download the ellipsoid model and topography, perform all the calculations ourselves, and use Cesium only to download topography and map tiles. Yes, that is possible, but if we perform all the coordinate processing in our own code, what is the reason to load a large package like Cesium just for tile downloading, when that only requires a bit of code?

I’ve tried Cesium Cartographic Origin, and it made it impossible to place the camera in the correct position to view the globe from orbit using known center coordinates like transform.LookAt(new Vector3(0, 0, -6378137));. In the Game window, the globe is not visible after pressing ‘Play’. Changing ‘Clipping Planes Far’ between 1e+07 and 1e+08 makes the globe visible, but no single predefined value works. It seems that ‘Cesium Origin Shift’ might conflict with the camera position setup, or there could be another issue, but the camera position sometimes changes, ignoring the coordinates and rotations set manually in the Inspector and in code. It appears that the Cesium camera moves randomly and is not controllable. I considered placing a fake object at the globe center and using it as a target for the camera with transform.LookAt(target.transform), but the same problem occurs—the camera position remains unpredictable.

On the other hand, using “True Origin”, the camera placement works but causes issues where the necessary coordinate transformations aren’t supported on the Android platform, even though they work well on macOS and iOS. Float coordinate precision is sufficient for globe accuracy when the camera is positioned in orbit, but since this doesn’t work across all supported platforms, it isn’t a usable solution. And, as you can see, none of the other methods work either. Maybe I need to follow your advice above and handle all the calculations myself, removing Cesium—but that’s certainly not the answer I was hoping for.

By the way, there’s no straightforward example for a globe view like Google Earth Pro provides. Perhaps you found it difficult to achieve due to the issues listed above? It would be helpful to give us a heads-up.

I can show the steps to reproduce, if you’d like. Create a simple project with Cesium Georeference defined as

The scene:

And Play mode with the camera script:

using UnityEngine;

[ExecuteAlways]
public class CameraLookAtScript : MonoBehaviour
{    
    void Start()
    {
        Debug.Log("CesiumCameraLookAtScript");
        transform.LookAt(new Vector3(0, 0, -6378137));
    }
}

Obviously, the camera looks at the North Pole, which means that instead of lookAt(0, 0, -6378137), the command was internally modified to lookAt(0, 0, 0):

Let’s try to do it manually. Create a 3D sphere and set it up with LLH coordinates (0, 0, -6378137). Oops, the Height coordinate is automatically changed to 0:

This makes it impossible to use the Cesium Georeference with a Cartographic Origin to position the camera at the globe’s center.

It’s impossible for “CesiumOriginShift to keep the CesiumGeoreference origin near the camera” when the CesiumGeoreference is placed on the globe surface, and the camera is far away in orbit to view the entire globe.

This is false. The CesiumOriginShift moves the CesiumGeoreference origin. That’s its entire purpose.

“Do your math in ECEF” means we wouldn’t use Cesium coordinate calculations but instead would need to download the ellipsoid model and topography, perform all the calculations ourselves, and use Cesium only to download topography and map tiles. Yes, that is possible, but if we perform all the coordinates processing in our code, what is the reason to load a large package like Cesium just for tile downloading when that only requires a bit of code?

I can’t understand how you got here. Sorry, but this paragraph makes no sense at all to me.

I believe that geospatial capabilities provided by Cesium for Unity are very helpful for creating a globe-oriented camera. I also believe (actually, I know, from first-hand experience) that tile selection and rendering is extremely complex. But if you disagree, no one is making you use Cesium for Unity.

I’ve tried Cesium Cartographic Origin, and it made it impossible to place the camera in the correct position to view the globe from orbit using known center coordinates like transform.LookAt(new Vector3(0, 0, -6378137));.

Your expectations seem flawed here. The globe-relative direction implied by new Vector3(0, 0, -6378137) will vary based on the CesiumGeoreference origin.

Unity’s coordinate system can never equal ECEF. For starters, ECEF is right-handed, while Unity is left-handed. For another, Unity doesn’t have the precision necessary to use a coordinate system like that.

But if you have a known position in ECEF, and you want to know where that is in Unity’s current coordinate system, cesiumGeoreference.TransformEarthCenteredEarthFixedPositionToUnity(new double3(0.0, 0.0, -6378137.0)) will do the trick! Keeping in mind that this position may be different next frame as a result of origin shifting.
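As a sketch, re-resolving a known ECEF point into the current Unity frame every frame and aiming the camera at it might look like this (the georeference field is assumed to be wired up in the Inspector; the component name is hypothetical):

```csharp
using Unity.Mathematics;
using UnityEngine;
using CesiumForUnity;

public class LookAtEcefPoint : MonoBehaviour
{
    public CesiumGeoreference georeference;

    void LateUpdate()
    {
        // The ECEF-to-Unity mapping can change whenever the origin shifts,
        // so recompute the Unity-space target each frame.
        double3 unityPos = georeference.TransformEarthCenteredEarthFixedPositionToUnity(
            new double3(0.0, 0.0, -6378137.0));
        transform.LookAt(new Vector3((float)unityPos.x, (float)unityPos.y, (float)unityPos.z));
    }
}
```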

Obviously, the camera looks to the North pole and it means instead of lookAt(0, 0, -6378137) the command internally modified to lookAt(0, 0, 0):

I think you may still not understand how CesiumGeoreference works. There is no modification of the position. You seem to be under the impression that the Unity coordinate system (which is the coordinate system in which your 0,0,-6378137 is specified) is somehow the same as ECEF. Not only is that not the case (by design), it can’t be the case if you want a non-jittery, globe-scale visualization in a single-precision game engine.

By the way, there’s no straightforward example for a globe view like Google Earth Pro provides. Perhaps you found it difficult to achieve due to the issues listed above? It would be helpful to give us a heads-up.

A high quality globe-oriented camera is not a simple thing. We know because we have first-hand experience creating one in CesiumJS. Unfortunately, it just hasn’t been high enough priority for us to create one in Unity yet. But there’s nothing to our knowledge that would prevent it. Cesium for Unity will help you with some of the math, but probably not all of it.

Hi @Kevin_Ring,

Thank you for providing so much information! I’ll try to follow your answers step-by-step to make sure I don’t miss any key points.

Okay, let me explain. To do the math in ECEF with input ECEF and LLH coordinates, we need to transform all coordinates to ECEF. So, we need to download the ellipsoid and the topography and perform these calculations ourselves. Part of this involves calculating the camera’s look angle for every ground pixel defined in LLH coordinates, which requires defining the pixel height based on the topography and calculating the camera’s relative position in ECEF coordinates. This is commonly done in satellite interferometry processing, for example. But I initially expected that Cesium would provide direct and inverse LLH-to-ECEF transformations for any Cesium project, including globe view.

I’m not experienced in Unity, but I don’t see why it would be more complex than in Blender or the ParaView engine.

I assumed Unity’s coordinates were just a shifted ECEF, computed from a selected reference point. It’s hard to see why there need to be three completely different coordinate systems.

Yes, you mentioned this earlier.

So, we can still use Cesium’s coordinate transformation, rather than needing to do these conversions ourselves by downloading the ellipsoid and topography?

Yes, it seems that way. The ‘Cesium for Unity Quickstart’ provides no real insight into the implementation.

To manage precision issues, we could separate the fractional part of coordinates and save it as a separate integer or a float with a multiplier. I believe this approach is fairly common and might also be usable in Cesium. But lacking detailed information, I’ve just followed the most straightforward approach.

But why? We can calculate the look angle for any selected pixel to a camera position in orbit, then apply this to consistently rotate the camera in the correct direction. This technique is commonly used for actual satellites in satellite interferometry, in Blender, and so on. For now, I’ve placed a small sphere at the globe’s center and have the camera follow it with a single command, transform.LookAt(target.transform). This works, of course, though I was surprised that the same couldn’t be done directly with coordinates alone.

To do the math in ECEF with input ECEF and LLH coordinates, we need to transform all coordinates to ECEF… this involves calculating the camera’s look angle for every ground pixel defined in LLH coordinates, which requires defining the pixel height based on the topography and calculating the camera’s relative position in ECEF coordinates.

I’m still lost. OK, you’re trying to let the user interact with the globe, right? The user clicks somewhere. You need to know where. Step 1: ask Unity to do a raycast based on the mouse position. That will give you the position the user clicked on the globe in Unity world coordinates. Step 2: ask Cesium to convert that to ECEF, using the methods on CesiumGeoreference provided for this purpose.
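Those two steps could be sketched like this (hypothetical component; it assumes the georeference is assigned, that the tileset has physics colliders enabled, and that the Unity-to-ECEF method is named TransformUnityPositionToEarthCenteredEarthFixed in your version):

```csharp
using Unity.Mathematics;
using UnityEngine;
using CesiumForUnity;

public class PickGlobePoint : MonoBehaviour
{
    public CesiumGeoreference georeference;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        // Step 1: where did the user click, in Unity world coordinates?
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Step 2: convert that point to ECEF via the georeference.
            double3 ecef = georeference.TransformUnityPositionToEarthCenteredEarthFixed(
                new double3(hit.point.x, hit.point.y, hit.point.z));
            Debug.Log($"Clicked ECEF: {ecef}");
        }
    }
}
```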

But I initially expected that Cesium would provide direct and inverse LLH-to-ECEF transformations for any Cesium project, including globe view.

It does. Have I not made that clear?

Methods to transform between Unity world coordinates and ECEF are found on the CesiumGeoreference. Methods to transform between ECEF and longitude/latitude/height are found on CesiumEllipsoid.

I don’t see why it would be more complex than in Blender or the ParaView engine.

Do Blender and ParaView Engine support 3D Tiles? We’re talking about loading and rendering a hierarchical LOD structure like 3D Tiles, right?

So, we can still use Cesium’s coordinate transformation, rather than needing to do these conversions ourselves by downloading the ellipsoid and topography?

Yes? My uncertainty is because I don’t know quite what you mean by “still”. Or by “downloading the ellipsoid and topography.”

I don’t see why you couldn’t use Cesium’s coordinate transformations for what we’re discussing here, though. It’s what they’re there for.

Yes, it seems that way. The ‘Cesium for Unity Quickstart’ provides no real insight into the implementation.

True story, but the Placing Objects on the Globe tutorial, which I’ve mentioned previously in this thread, makes a solid and earnest attempt at explaining these concepts.

To manage precision issues, we could separate the fractional part of coordinates and save it as a separate integer or a float with a multiplier.

Sure, Patrick Cozzi and I wrote about a number of such techniques in chapter 5 of our book. Don’t you think those kinds of approaches would require cooperation from Unity, though?

In the absence of that, the standard approach is to use a floating origin. That’s what we do.

But why? We can calculate the look angle for any selected pixel to a camera position in orbit, then apply this to consistently rotate the camera in the correct direction.

A simple version is probably not too hard, it’s true. A high quality one requires us to get a lot of details right. You can take a look at the implementation in CesiumJS - which we still don’t consider perfect! - to get a sense of some of the complexity:

I don’t want to make excuses, or overstate it, but we know it will take a bit of work to do well, and so it hasn’t yet become a high enough priority among all the other things we can and are doing.