Expose all camera properties of WanVideo Uni3C Embeds
Hi Kijai,
I saw you started working on this, but there has been no update for a long time.
There are 7 parameters to control the camera freely.
Rotation parameters:
d_r: Distance from the camera to the foreground center, default is 1.0, range 0.25 to 2.5.
d_theta: Rotated elevation degrees, <0 up, >0 down, range -90 to 30.
d_phi: Rotated azimuth degrees, <0 right, >0 left, supports 360 degrees; range -360 to 360.
Offset parameters:
x_offset: Horizontal translation, <0 left, >0 right, range -0.5 to 0.5; depends on depth.
y_offset: Vertical translation, <0 up, >0 down, range -0.5 to 0.5; depends on depth.
z_offset: Forward and backward translation, <0 back, >0 forward, range -0.5 to 0.5; depends on depth.
Intrinsic parameters:
focal_length: Focal length, range 0.25 to 2.5; changing focal length zooms in and out.
We also support traj_type to define camera trajectories:
"custom", "free1", "free2", "free3", "free4", "free5", "swing1", "swing2", "orbit". "custom" controls the camera along a custom trajectory using the parameters mentioned above, while the others select pre-defined camera trajectories.
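To make the request concrete, here is a minimal sketch of how these inputs could be exposed following ComfyUI's usual `INPUT_TYPES` widget conventions. The class name is illustrative and nothing here is existing WanVideoWrapper API; the names, defaults, and ranges simply mirror the parameter list above.

```python
# Hypothetical node definition; class and key names are illustrative.
TRAJ_TYPES = ["custom", "free1", "free2", "free3", "free4",
              "free5", "swing1", "swing2", "orbit"]

class Uni3CCameraControl:
    @classmethod
    def INPUT_TYPES(cls):
        # Ranges and defaults taken from the Uni3C parameter list above.
        return {"required": {
            "traj_type": (TRAJ_TYPES,),
            "d_r":          ("FLOAT", {"default": 1.0, "min": 0.25,  "max": 2.5,   "step": 0.01}),
            "d_theta":      ("FLOAT", {"default": 0.0, "min": -90.0, "max": 30.0,  "step": 1.0}),
            "d_phi":        ("FLOAT", {"default": 0.0, "min": -360.0,"max": 360.0, "step": 1.0}),
            "x_offset":     ("FLOAT", {"default": 0.0, "min": -0.5,  "max": 0.5,   "step": 0.01}),
            "y_offset":     ("FLOAT", {"default": 0.0, "min": -0.5,  "max": 0.5,   "step": 0.01}),
            "z_offset":     ("FLOAT", {"default": 0.0, "min": -0.5,  "max": 0.5,   "step": 0.01}),
            "focal_length": ("FLOAT", {"default": 1.0, "min": 0.25,  "max": 2.5,   "step": 0.01}),
        }}
```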
Please add this. This is much needed.
? :)
Those inputs are not for the model, but for the preprocessor rendering the input video.
The model does also accept the trajectories alongside the video; as far as I know they won't work alone, and the video is the main driver. Using just the video on its own has worked so well that I haven't felt the need to even add the trajectories.
Hi Kijai,
Thanks for clarifying, but just to point this out more precisely:
Uni3C does make use of the full set of camera parameters, not directly inside the model, but in the preprocessor that generates disparity / synthetic views. That’s the stage where depth and parallax are created.
If the node only forwards plain video plus a simple traj_type, most of that control is lost and you're stuck with defaults. But if the node exposes and passes all seven camera parameters:
d_r, d_theta, d_phi
x_offset, y_offset, z_offset
focal_length
traj_type (“custom”, “free1…”, “swing”, “orbit”),
then Uni3C will actually respect those values during the disparity render.
So the model itself doesn't "consume" the raw numbers, but it absolutely depends on the preprocessor outputs that are driven by them. That's why people (me included) need access to all of them in the node; otherwise we can't define proper custom trajectories and depth-aware camera motion.
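To illustrate how the preprocessor stage could consume these numbers, here is a rough sketch of turning the spherical parameters plus offsets into a camera position around the foreground center. This is only an assumption about the geometry, not Uni3C's actual code; the function name and the exact sign conventions (taken from the parameter descriptions above) are mine.

```python
import math

def camera_position(d_r=1.0, d_theta=0.0, d_phi=0.0,
                    x_offset=0.0, y_offset=0.0, z_offset=0.0):
    """Hypothetical illustration: place the camera on a sphere of
    radius d_r around the foreground center, then apply the
    translational offsets. Angles are in degrees; signs follow the
    parameter list (d_theta < 0 up, d_phi rotates the azimuth)."""
    elev = math.radians(d_theta)
    azim = math.radians(d_phi)
    # Spherical -> Cartesian, y up; the camera starts on the +z axis.
    x = d_r * math.cos(elev) * math.sin(azim)
    y = -d_r * math.sin(elev)  # d_theta < 0 moves the camera up
    z = d_r * math.cos(elev) * math.cos(azim)
    # Offsets translate the camera after the rotation is applied.
    return (x + x_offset, y + y_offset, z + z_offset)
```

With the defaults the camera sits at unit distance on the +z axis; the preprocessor would then render the source frame from each such pose to create the disparity the model conditions on.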
Would you be able to expose these inputs in the node and forward them to the Uni3C preprocessor? That would solve the issue and make the camera embeds much more useful.
There's nothing to expose, as there's no Uni3C preprocessor node at all; it would have to be created from scratch. ViewCrafter is what it's based on, and that does have a node set; although old, it may still work: