Hello!
Great to use Cesium in combination with the Google Maps API.
We are rendering a VR animation where the camera path is a circle, always looking at the center of the circle (so we can loop the animation but still look around in VR).
Now we are loading in the Google Maps tiles, but I noticed that only the area we are looking at renders with higher-resolution tiles. Great for a normal camera shot, but not for a VR shot where the rest of the frame is still lower quality. I've tried turning up some settings but without much success.
Do you have any thoughts on how to get the best level of detail for a VR animation?
Can you share what settings you’ve already tried to adjust? We usually suggest decreasing the Maximum Screen Space Error so that higher detail tiles can be loaded and rendered. If this causes more tiles than necessary to be loaded, you can use a Cartographic Polygon to exclude tiles outside of intended animation range. Also, be sure to disable Frustum Culling so that out-of-view tiles will still try to load at the desired level of detail. (You’ll still want Fog Culling to be enabled.)
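To make the effect of that setting concrete, here is a minimal sketch of the screen-space-error heuristic that 3D Tiles engines typically use (an illustration of the general idea, not the exact Cesium for Unreal implementation). A tile keeps refining to higher detail while its projected error exceeds the threshold, which is why lowering Maximum Screen Space Error pulls in higher-detail tiles at greater distances:

```cpp
#include <cmath>

// Projected screen-space error (in pixels) of a tile with a given
// geometric error, viewed from a given distance.
double ScreenSpaceError(double geometricError, double distance,
                        double screenHeightPx, double verticalFovRadians) {
    double sseDenominator = 2.0 * std::tan(verticalFovRadians / 2.0);
    return (geometricError * screenHeightPx) / (distance * sseDenominator);
}

// A tile refines to a higher level of detail while its screen-space
// error exceeds the maximum screen space error threshold.
bool ShouldRefine(double geometricError, double distance,
                  double screenHeightPx, double verticalFovRadians,
                  double maximumScreenSpaceError) {
    return ScreenSpaceError(geometricError, distance, screenHeightPx,
                            verticalFovRadians) > maximumScreenSpaceError;
}
```

For example, a tile whose error projects to ~3 pixels at 5 km would not refine at the default threshold of 4.0, but would refine at 2.0.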
Just to confirm, this is supposed to be an interactive experience, and not just recorded footage?
Thanks for the quick response!
I'd like to work with you to get the best resolution for VR.
At first I used the default 4.0 screen space error. I quickly tried changing it to 2.0, but I didn't see any difference when rendering some of the VR images again. Perhaps I need to change some other settings too.
For now this is done for recorded footage. Perhaps later on it could become an interactive experience, but I'd like to get the best resolution for the tiles around our architecture project.
I’ve added two screenshots with some of my settings.
How are you rendering the video? If you use Unreal’s Sequencer, it should automatically wait for all the tiles in view to load before snapping the frame. But if you’re using another mechanism to record, you won’t benefit from that.
We are using Unreal’s sequencer indeed.
These are the basic settings we use.
Can you reproduce my problem or does a panoramic sequence work on your end?
It’s a bit milder in your case because you’re not trying to use a tall aspect ratio. But fundamentally, I think Cesium is not able to obtain the correct camera parameters from the Movie Render Queue and that’s why your rendering is not what it should be.
We don’t have a great solution for that issue, but we do have a workaround in the form of the CesiumCameraManager. See the PR that added it for more information, including a bit about applying it to the Movie Render Queue specifically:
So if i understand correctly, when I add that blueprint to my level, there is a custom camera that sweeps 360 degrees each frame and I can reference that in my CesiumCameraManager so it gives out information to the sequencer every frame?
I’m a bit new, so I might have misunderstood the way to fix it.
I’ll try to create the blueprint and test it out again!
The sweeping camera is just an example of the API. You don’t need a sweeping camera.
What you do need is a camera that mirrors the camera that is used to render your view. Same location, same orientation, same width and height, same field of view. You need to create a blueprint that gets those parameters from the real camera you’re rendering with every frame (or hardcode the ones that don’t change), and calls Update Camera with them. This should probably happen every frame, so attach it to the Tick event in your level blueprint.
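The per-frame data flow described above can be sketched as follows. This uses hypothetical plain-data stand-ins for the Unreal/Cesium types involved (in the actual project you'd wire this up in Blueprint with the Update Camera node on Tick); it only illustrates what gets copied from the render camera and what gets hardcoded:

```cpp
// Hypothetical stand-ins for FVector / FRotator and the CesiumCamera struct.
struct Vec3 { double X, Y, Z; };
struct Rot  { double Pitch, Yaw, Roll; };

struct CesiumCameraParams {
    Vec3 Location;
    Rot Rotation;
    double ViewportWidth;       // should match the render resolution
    double ViewportHeight;
    double FieldOfViewDegrees;
};

// Stand-in for the camera actor you actually render with.
struct RenderCamera {
    Vec3 Location;
    Rot Rotation;
    double FieldOfViewDegrees;
};

// Called once per frame (the equivalent of the Tick event): copy the live
// transform from the render camera, and hardcode the values that don't
// change per frame (the viewport size).
CesiumCameraParams MirrorCamera(const RenderCamera& cam,
                                double renderWidth, double renderHeight) {
    CesiumCameraParams p;
    p.Location = cam.Location;
    p.Rotation = cam.Rotation;
    p.ViewportWidth  = renderWidth;   // hardcoded render width
    p.ViewportHeight = renderHeight;  // hardcoded render height
    p.FieldOfViewDegrees = cam.FieldOfViewDegrees;
    return p;
}
```

The resulting parameters are what you would pass to Update Camera each Tick so tile selection matches what the render camera actually sees.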
is this correct as a start?
I can't seem to find some of the nodes from your reference, to get the camera viewport size or location, for example.
Getting the render viewport size from the Movie Render Queue is… basically impossible, that’s why Cesium for Unreal doesn’t do it automatically (the “Frustrum culling does not account for different aspect ratios” issue I linked way up has the gory details). Best bet is to hardcode it, unfortunately. Location should be straightforward enough with the Get Actor Location node.
I wish I could follow what you are saying, but I'm noticing that I'm missing a lot of Blueprint knowledge to get it working. I'm not able to get further than this currently.
@Yangxuanzong, I don’t think it works for me with my current settings. I’ve tried rendering my sequence with “Use LODZero” unchecked, but I don’t see much difference yet. This is the outcome:
Yesterday I tested 360 VR with Cesium terrain and Cesium 3D Tiles. Before, I had an LOD overlapping problem; after unchecking this option, it works. So I think your problem is having different LODs on the Google Maps tiles.
One more thing: I didn’t use the panoramic rendering option, I used a different 360 VR camera plugin.
There’s no real trick to this. I just mean that when you’re setting the ViewportSize property of the CesiumCamera (as well as other properties like FieldOfViewDegrees), set constant values that match the width and height of your render, rather than trying to get them automatically.
For the Location and Rotation properties, you should be able to just drag your camera object onto the Blueprint canvas, drag out Location and Rotation nodes from it, and connect them to the corresponding properties on the CesiumCamera.