Nuke-perfecting the look

This report aims to document the intricate process of 3D camera matching using Nuke, which plays a crucial role in seamlessly integrating computer-generated elements with live-action footage in the realm of visual effects production. Throughout the report, essential concepts and techniques such as tracking, lens undistortion, match moving, and compositing will be explored in detail, complemented by a step-by-step walkthrough of creating 3D asset animation using Maya.

Mind Map

Here is a link to the Mind Map which I created on the Miro board.


Mind Map

Concept

As a first-time user of the software, I needed to ensure that I could manage it effectively, particularly as I learn at a slower pace due to dyslexia. I had a specific visual goal in mind for the project: to create a spaceship-like object that resembles a drone. This object would fly in from behind a window, land on the ground, and be illuminated by lights to indicate an evening landing. The landing would occur after people had finished playing basketball and returned to their homes, allowing the occupants of the drone to disembark.

Visual Effects Step-by-step Procedure

Footage Retiming & .jpeg sequence in Nuke

To start with, the first step of the process is to work on the lens distortion. First, bring the ".mov" footage into Nuke and change its size according to the project guidelines. Add a "Retime" node (press Tab and type its name) and connect it to the ".mov" footage to change the frame range. We could edit the original footage directly, but then we could not make any changes after the output is rendered later. Hence I decided to use frames 1 to 400.

That is why I set my input range to 0-400 and my output range to 0-399, so the output sequence does not start from any frame other than the first. Richard explained this very well in the morning session of week 3.
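For reference, the same Read and Retime setup can be scripted with Nuke's Python API. This is only a minimal sketch of the steps described above; the file path is a placeholder and the Retime knob names are assumptions worth checking against your Nuke version.

import nuke

read = nuke.nodes.Read(file="footage/shot.mov")   # placeholder path
read["first"].setValue(0)
read["last"].setValue(400)

retime = nuke.nodes.Retime()
retime.setInput(0, read)
# Input range 0-400 remapped to output range 0-399, as in the article.
retime["input.first"].setValue(0)
retime["input.last"].setValue(400)
retime["output.first"].setValue(0)
retime["output.last"].setValue(399)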


Image 1: Retime node, used to change the frame range



The next step of the process is adding a "Write" node to make it a .jpeg sequence. Change the file type to .jpeg, set the frame mode to "expression", and change the file name by adding ".####.jpeg". This enables me to generate one file per frame. Then I press "Render" and choose the frames to render.
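A hedged Python equivalent of this Write setup is shown below; the output path and frame range are assumptions based on the steps above.

import nuke

write = nuke.nodes.Write()
write.setInput(0, retime)                              # the Retime node from the previous sketch
write["file"].setValue("renders/graffiti.####.jpeg")   # placeholder output path
write["file_type"].setValue("jpeg")
# Render frames 0-399, matching the retimed output range.
nuke.execute(write, 0, 399, 1)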


Image 2: Render to .jpeg
Image 3: Footage retiming & .jpeg sequence in Nuke


Lens undistortion

Next, I created a lens distortion profile using the lens distortion chart provided by Richard; this process exports the profile as an ".exr" file. The main footage is 24 fps, and the lens distortion footage should match the same frame rate and range. A reformat may be required at this point, but in some cases it is not needed, depending on the footage. In my case it was not required, but I still added the node so I could adjust it if I had missed a step.

Image 4: Lens distortion footage


Colour grading helps to correct colour imbalances. I connected the output of the checkerboard footage to a "Grade" node, then set the black point and white point by holding Ctrl+Shift on the keyboard and sampling the values for black and white. This adjusts the colour, brightness, contrast, and saturation.

Image 5: After the Color Grading

There are many problem-solving videos and links on Canvas to help students, and I will share a few here.

After applying colour grading to the footage, we add a "Sharpen" node to enhance the clarity of the image.

Next, add a "LensDistortion" node to the distortion chart footage to apply an inverse distortion correction. Double-clicking the node opens its properties. Go to "Analysis" and press "Detect" to detect the main features, adjusting the lines on the checkerboard with the "add features" pen tool if necessary. After that, press "Solve" to undistort the checkerboard. Then, in the LensDistortion properties, change the Output Mode from Undistort to STmap ("spatial transform map"), so the node outputs an STmap instead of a distortion-corrected image sequence.

Why do we do that?

An STmap is a special kind of image that contains information about how each pixel in one image should be moved to a new location in another image. That is why it is useful for correcting lens distortion in video: you can use the STmap to fix the distortion without having to re-render the entire video.
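To make the idea concrete, here is a small NumPy sketch of what applying an STmap means conceptually: the map's two channels are normalised UV coordinates telling each output pixel where to sample the source image. This is an illustration of the concept only (nearest-neighbour lookup, same resolution assumed), not Nuke's actual implementation.

import numpy as np

def apply_stmap(src, stmap):
    # src: HxWx3 image, stmap: HxWx2 with normalised UV values in [0, 1].
    h, w = stmap.shape[:2]
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            u, v = stmap[y, x]            # where this output pixel samples from
            sx = int(round(u * (w - 1)))
            sy = int(round(v * (h - 1)))
            out[y, x] = src[sy, sx]       # nearest-neighbour lookup for simplicity
    return out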


Image 6: Lens undistortion process

After this LensDistortion step, we can see that the footage goes outside the frame boundaries. To fix the problem, we need to open the LensDistortion properties panel and change the BBox setting from Auto to Manual, so we can make the adjustment that keeps the footage symmetrical.


Image 7: Bbox


Mode setting (changed to STmap)
Image 8: Output changes (mode changed to STmap)


On Canvas, Richard has shown us all the steps under the 3D Match Moving topic.

After the final output, hit the Tab key on the keyboard and add a "Write" node. In the Write node's properties panel, select the file format you want to render, in this case .exr.

Set the file path and filename for the rendered output, then click the "Render" button to start the render process. Once the render is complete, the file containing the lens-distortion-corrected footage is ready for the compositing workflow or for export to Maya.


Image 7: Render Settings

STmap

After the LensDistortion process, we import two files, the STmap (24mm_STmap.exr) and the retimed footage, and then add an "STMap" node. Connect it to both the footage and the STmap. In the properties, we change the UV channels from "none" to "forward" to efficiently undistort the footage (and later distort CG elements). I found a very good video on YouTube that explains this very well.


So in this process, we import the .exr file and the retimed footage, which is the .jpg render sequence, add the "STMap" node and attach it to both the .exr and the retimed .jpg sequence. In Nuke, the STMap node has two UV channel sets, called "forward" and "backward". The forward UV channels map coordinates from the source image to the target image, while the backward UV channels map coordinates from the target image back to the source image. By default, the STMap node is set to use the backward UV channels.

After that, we have to use a "Reformat" node, because the size of the image is now different from before, so we create a new output format and assign it to the footage.
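For reference, the node wiring just described might be scripted roughly like this (a sketch only; the file paths are placeholders and the "uv" knob value assumes the forward/backward layers written by the LensDistortion node):

import nuke

plate = nuke.nodes.Read(file="renders/graffiti.####.jpeg")   # retimed .jpeg sequence (placeholder path)
stmap_img = nuke.nodes.Read(file="lens/24mm_STmap.exr")      # STmap exported earlier (placeholder path)

stmap = nuke.nodes.STMap()
stmap.setInput(0, plate)         # src: the footage to warp
stmap.setInput(1, stmap_img)     # stmap: the UV map
stmap["uv"].setValue("forward")  # "forward" undistorts the plate; "backward" re-distorts CG later

# The undistorted image no longer matches the original format, so assign a new one.
reformat = nuke.nodes.Reformat()
reformat.setInput(0, stmap)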


Image 8: Adding STmap

After all that, add a "Retime" node, for which Nuke will automatically detect the number of frames. Finally, we add a "Write" node to render it. This is how we finish the first part of the project.

3D camera Matching in NUKE

Camera tracking is used to create a 3D camera that matches the movement of the real camera, by tracking the movement of specific points in the footage and using that data to calculate the camera's position.

So I imported the undistorted footage into Nuke; its resolution is 1971x1101, and we have to change the composition size to match the resolution we created previously. Add a "CameraTracker" node to the Nuke script; this node is used to track the movement of the camera in the footage. Then connect the undistorted footage to the CameraTracker node, go to the properties and change the "Number of Features"; I set it to approximately 800. The node will automatically detect features in the footage and track their movement. We also press "Retain features location" so that it can detect appropriate locations.

After that, in the same CameraTracker node's property settings, we set the Camera Motion to Free Camera and, under Lens Distortion, choose Unknown Lens, because we have already undistorted the video. The Focal Length is set to Known, because Richard provided the footage along with a PDF containing all the details of which camera was used and at what focal length.


Image 9: Camera tracking. Camera: Canon 5D Mark III, 24 fps

Then we track the footage and solve it by pressing Solve. Now let's come to auto-tracking: Nuke uses a combination of algorithms to automatically track the movement of each point over time. In our footage we can see green and orange tracks; the orange ones are not solved.


Image 10: Autotracking

We go to the AutoTracks properties, select track len - min, track len - avg and track len - max, and increase the minimum length. This filters out trackers shorter than 53 frames. Then we adjust the max error, delete the rejected tracks and then delete the unsolved ones. This way we reduce the solve error count.

Richard explained the process in the week 4 afternoon class.


Image 11: Solved autotracking with Refine Solve

Now we set the ground plane. For that, we select the tracks, right-click on them and choose the ground plane option. That is how we specify the origin of the scene. After that, we have to set the scale. When integrating CG elements into live-action footage, it is important to ensure that the size and scale of the CG elements match the size and scale of the real-world objects in the scene.

After that, we create the scene: in the CameraTracker properties, change Export to Scene and press Create. When we connect this to the Viewer node we cannot visualise it yet; for that, we need to render it, which will allow us to create 3D scenes and integrate them seamlessly with 2D footage and graphics. For that, we add a "ScanlineRender" node.

This node has three inputs (obj/scn, bg, camera): "obj/scn" connects to the Scene node, "bg" connects to the background footage, and "camera" connects to the camera. This allows us to see the 3D integration.
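A rough Nuke Python sketch of that wiring is shown below; the node names and the ScanlineRender input indices are assumptions, since in the node graph you simply connect the labelled obj/scn, bg and cam arrows.

import nuke

cam = nuke.toNode("Camera1")     # camera created by the CameraTracker (name assumed)
scene = nuke.toNode("Scene1")    # scene with the point cloud, cards and axes (name assumed)
plate = nuke.toNode("Read1")     # undistorted background footage (name assumed)

render = nuke.nodes.ScanlineRender()
render.setInput(0, plate)   # bg      (input index assumed)
render.setInput(1, scene)   # obj/scn (input index assumed)
render.setInput(2, cam)     # cam     (input index assumed)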


Image 12: Distance, ScanlineRender and CameraTrackerPointCloud nodes


Time to test the solve: we add a "Cube" node, press the play button and check that it moves correctly with the camera.

In the footage, my main target was to track the window, because I want the drone to come from behind it. Double-click the CameraTracker node, select the trackers, then right-click and create a Card node. To view the card, we connect it to the Scene node and the ScanlineRender to the Viewer node. Then, in the Card properties, we adjust the uniform scale. After that, we create four Axis nodes by right-clicking on each tracker and connect them to the Scene node. Then we can connect the Card node to a CheckerBoard and change its direction and scale.


Image 13: Added the axes and made a plane

This window plane will help us later in Maya to figure out the space. In the end, we add a "WriteGeo" node after the Scene node and export the file by saving it and pressing Execute, to make it usable in Maya.
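The geometry export can also be scripted; a minimal sketch, assuming an .fbx target path and the node names from the sketch above:

import nuke

scene = nuke.toNode("Scene1")                            # name assumed
write_geo = nuke.nodes.WriteGeo()
write_geo.setInput(0, scene)
write_geo["file"].setValue("export/camera_solve.fbx")    # placeholder path
nuke.execute(write_geo, 0, 399, 1)                       # Execute writes the geometry and camera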

Here is a link to a very good YouTube learning video which was shared by Richard on Canvas.


3D Animation in MAYA

After completing the 3D camera matching process in Nuke, the next step is to integrate CG elements into the scene. So we import our .fbx file into Maya and organise it. We have the four axes and the point cloud. To view through the camera, we go to the Perspective panel and click on the camera.

Now we need to bring in the undistorted footage. For that, we go to Camera > CameraShape > Environment and press Create, then load the image: go to the footage folder (Graffiti), pick the first image only, and then in Maya tick "Use Image Sequence".


Image 14: Maya workflow, camera and image plane setup


When we played the footage, we could see that the image plane was so close to the camera that it was clipping all the points. To solve this problem, we go to the perspective camera, then to imagePlaneShape > Placement, and change the Depth value.
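In script form this is a single attribute on the image plane shape; a hedged sketch, where the node name follows Maya's default naming and the value depends on your scene scale:

import maya.cmds as cmds

# Push the image plane further from the camera so it stops clipping the point cloud.
cmds.setAttr("imagePlaneShape1.depth", 500)   # node name and value are assumptions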


Image 15: Change the camera


Now we have to create a ground where the drone is going to land. We should check the resolution of the footage.

We need to check that the drone (asset) scale matches our footage. For that, in the "Render Settings" window, set the resolution to match the resolution of the Nuke footage: 1971x1101.

Now we animate the asset and set it up accordingly. I started with the drone fans; this part was difficult, and it took me time to understand the process.

I separated the fans of the drone one by one in Maya by right-clicking and choosing Extract Face, then right-clicking and choosing Combine. We then have to align each fan so its orientation does not get disturbed.


Image 16: Drone animation settings


After animating the fan, set the curve to the post-infinity cycle with offset. Then we apply the Substance material to the asset and bring it into the footage.
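The fan cycle can also be set up with a couple of commands; a minimal sketch, assuming a fan transform named fan_01 (hypothetical) spinning on its X-axis:

import maya.cmds as cmds

# One revolution over 12 frames, then repeat forever with offset so the rotation keeps increasing.
cmds.setKeyframe("fan_01", attribute="rotateX", time=1, value=0)
cmds.setKeyframe("fan_01", attribute="rotateX", time=12, value=360)
cmds.selectKey("fan_01", attribute="rotateX")
cmds.setInfinity(postInfinite="cycleRelative")   # "Cycle with Offset" in the Graph Editor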


Image 17: Substance Painter material


Image 18: Substance material output


Masking

Now the important thing is to create the mask for the object, which will allow the asset to land properly. We create a window plane to mask the window space and make it look as if the asset is coming from behind it. Extrude it as you can see in Image 19, and then delete the side faces.


Image 19: Masking

Then give this plane a new material, the Arnold aiShadowMatte; this will help to mask the drone by itself. One more thing: we have to add the same material to the ground as well. In Image 18 you can see the shadow on the ground; this is because we have added this material.
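Assigning the shadow matte can be scripted in a couple of lines; a sketch assuming the Arnold plug-in is loaded and the geometry is named ground_geo and window_plane (both hypothetical names):

import maya.cmds as cmds

# Create an Arnold aiShadowMatte shader and assign it to the masking geometry.
shadow_matte = cmds.shadingNode("aiShadowMatte", asShader=True, name="drone_shadowMatte")
cmds.select("ground_geo", "window_plane")   # hypothetical object names
cmds.hyperShade(assign=shadow_matte)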


HDRI

The next step is to create an HDRI; Richard has already provided the files for that. It is a very easy process. Also look at this useful video.


Image 20: Set HDRI in skydome file to scene-linear Rec.709-sRGB


In the Maya file we now add the skydome light. In its settings, we go to Color, press the button next to Color, then under Image Name grab the HDRI file which we created in Photoshop. Then we match the rotation of the HDRI to the scene, as we worked on in the week 7 class.

After this process, I animated the drone in the scene using simple keyframes, making it look as if it comes in through the window and lands on the ground. The fans were animated on their X-axis, with the post-infinity option set to Cycle.

I faced a big challenge animating the fans and then working on the render channels, because every time I tried to view the render passes something went wrong. It took me five attempts to render it again and again, and a lot of time was wasted on the render settings. But finally, I solved it with the help of my professor and some combined study with classmates.


Image 21: Animation settings of Drone


Rendering the layers

When working with CGI in Nuke, it is important to render out the different layers of your 3D asset separately in Maya, so that you can easily composite them in Nuke. We create two layers to separate the shadows from the asset. This way we can make changes of any kind once we take it into Nuke.

This was difficult for me to understand even after watching Richard's videos from the lectures.

Collection Drone: To achieve this, first we create a layer for the drone (asset), then right-click and create a new collection (name it Drone). Into this layer we dragged our (drone_Motion1) and (window_Mask_Maskout) groups. This is the geometry which will show up in Nuke. If we do not add the window to the same layer, the window mask will not appear in the render, so it is important to add both to one layer. After that, we add another collection for the ground.

Collection Ground: We assigned a new material to the ground to bounce light back (the ground I created has a shadow matte material). To override it in this layer only, I set an override: right-click on the render collection and choose Create Shader Override. Whenever we do this it copies the collection; then we bring in a new shader and, with a middle click, we can drag and drop it.


Image 22: Rendering Setup & Arnold Render Settings


Primary Visibility: Primary Visibility is an attribute that determines whether an object is visible in the render or not. If Primary Visibility is turned off for an object, it will not appear in any of the rendered images, regardless of any other attributes or settings. Because we do not want to show this plane in the render, we add another override that handles the primary visibility for us: go to the ground shape > Arnold > Visibility > Primary Visibility, right-click, choose Create Absolute Override for the visible layer, and then untick it.
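For reference, the attribute that this override toggles can also be switched directly with a script; this is only to show which attribute is involved (the article itself uses a per-layer Render Setup override, and the shape name here is an assumption):

import maya.cmds as cmds

# Hide the ground plane from camera rays; it can still catch shadows and block objects.
cmds.setAttr("ground_geoShape.primaryVisibility", 0)   # hypothetical shape name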


Image 23: Primary Visibility and AOVs

In the week 6 afternoon class, Richard explained it at 2:15.

Layer_DroneShadow: In this layer we have the drone shape, but we also need to hide the drone itself because we only need to see the shadow. So we repeat the process: go to the drone under rs_DroneShadow, open the folder, click any one of the shapes, then Arnold > Visibility > Primary Visibility, right-click, and choose Create Absolute Override.

AOVs:

By rendering individual components of a scene as separate AOVs, you have more control over the final image in post-production. For example, you can adjust the amount of specular highlights or reflections, or add or remove shadows, without having to re-render the entire scene.

I found a really useful YouTube tutorial which helped me to understand why we do this and how separating the layers helps us.


Diffuse Direct: This AOV can be used to adjust the intensity and colour of the direct lighting contribution without affecting the overall lighting of the scene.

Diffuse Indirect: This AOV captures indirect diffuse lighting, that is, diffuse light that has bounced off other surfaces.

Specular Direct: This AOV captures the direct specular reflection of a light source on a surface.

Specular Indirect: This AOV captures indirect specular reflections coming from another object or an environment map.

AIMV: This captures the motion vectors of objects in a scene.

To create it, we go to the master layer in Render Setup and add the motion vector AOV in the render settings (master layer), naming it, for example, my_motion_vector; it will then show up in the AOVs. Then right-click on it and select the AOV. Now we go to the Node Editor and add an aiMotionVector node, then go back to my_motion_vector, right-click, choose Select on AOV, and drag and drop the node onto the shader slot.

This was a useful YouTube video.


Image 24: AIMV Motion vector

Z: The Z-depth represents the distance between the camera and the objects in the scene. It is used for effects such as depth-of-field blur, fog, and depth-based compositing.

We go to the Render Settings and add these render layers by clicking them in the AOV Browser menu, adding the AOVs listed above. Then we right-click the created AOVs in the AOV menu and add them to the render layers as overrides.

Render

The final thing to be done is the render. We open the Render Settings and, in the Common tab:

File Output: we set Arnold to merge AOVs and the image format to .exr.

Frame/Animation ext: we change it to name.#.ext, which gives the name of the project, the frame number and the file extension. We add the frame range, which in my case was 0 to 399, and set the width and height to 1971x1101, which is my footage size.

In Render layers, we will make sure that the master layer is turned off.
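The same Common-tab settings can be captured in a short script; a hedged sketch using the standard Maya render-globals attributes (the Arnold driver attribute names are assumptions worth checking against your MtoA version):

import maya.cmds as cmds

# Output naming name.#.ext, animation on, frames 0-399, resolution matched to the plate.
cmds.setAttr("defaultRenderGlobals.imageFilePrefix", "<Scene>", type="string")
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.startFrame", 0)
cmds.setAttr("defaultRenderGlobals.endFrame", 399)
cmds.setAttr("defaultResolution.width", 1971)
cmds.setAttr("defaultResolution.height", 1101)

# Arnold driver: EXR output with merged AOVs (attribute names assumed).
cmds.setAttr("defaultArnoldDriver.aiTranslator", "exr", type="string")
cmds.setAttr("defaultArnoldDriver.mergeAOVs", 1)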

Image 25: Render settings to create files

Post-production in NUKE

Now we start work in Nuke. The first step is to bring the .exr file folders into Nuke, which we can find in our Maya project's images folder: (Drone_Shadows) and (Drone1).

We grab the Drone1 node and Unpremult it first. Why do we do that? The "Unpremult" node can be used to separate the colour and alpha channels of an image, allowing for independent adjustments.

After that, we bring in the AOVs which we created earlier in Maya. For that, we add four "Shuffle" nodes to pull out Diffuse Direct, Diffuse Indirect, Specular Direct, and Specular Indirect. Then we add the "Merge" node, which helps us to have greater control over the final look of the rendered image.
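One common way to script this rebuild is to shuffle each light pass out of the EXR and add the passes back together with plus merges; a minimal sketch, assuming the Arnold layer names match the AOVs listed earlier and the file path is a placeholder:

import nuke

render = nuke.nodes.Read(file="images/Drone1/drone.####.exr")   # placeholder path
unpremult = nuke.nodes.Unpremult()
unpremult.setInput(0, render)

layers = ["diffuse_direct", "diffuse_indirect", "specular_direct", "specular_indirect"]
shuffles = []
for layer in layers:
    shuffle = nuke.nodes.Shuffle(label=layer)
    shuffle.setInput(0, unpremult)
    shuffle["in"].setValue(layer)     # pull this AOV layer into rgba
    shuffles.append(shuffle)

# Add the passes back together: start with the first pass and plus-merge each of the rest.
result = shuffles[0]
for shuffle in shuffles[1:]:
    merge = nuke.nodes.Merge2(operation="plus")
    merge.setInput(0, result)    # B input
    merge.setInput(1, shuffle)   # A input
    result = merge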


Image 26: Post-production in NUKE


Now it is easy for us to control the look as we want: we can add a "Grade" node and, through its settings, change the colours and the light.

Now we need to combine the result with the main footage and the fully controlled nodes we have created. For that, we add a "Copy" node, which copies channels from one input image into another.

This allows us to copy the RGBA channels from A to B. After that, we add a "Premult" node again to have full control over what we have done so far. I thought I should make it a little blurred, because in real footage, when things move relative to the camera, they never stay in focus all the time. For that, we add a "ZDefocus" node. To set it up correctly, go to the properties, change the Depth Channel to depth.Z, change the math to depth, and set the Output to result.

Then we use a "VectorBlur" node and, in its properties, change the UV channel setting to AIMV, because in Arnold's motion vector pass the motion vector information is stored in the "AIMV" channel.

Then we add the "grain" node because every normal footage has grains on it, so to make it more realistic we will add this node.

After that, we will add a "Merge" node because this way we will have one layer of all the above functions we have performed and then we can add shadows files. That's how we end up merging the drone and shadow on the ground.

Image 27: Post-production in NUKE
Image 29: Drone and shadow on the ground

This part was tough for me to understand, so I watched several videos to work it out.

We undistorted the footage earlier in Nuke, but the camera we created in Maya for the drone is a CG camera, so the drone renders are distortion-free because they come from a perfect 3D camera solve. Now we need an STMap node using the 24mm STmap which we generated earlier in Nuke, with the UV channel setting changed to backward instead of forward. This does the opposite of undistorting: it adds the lens distortion back in.

Then we will add the "Merge" node with the retime footage. After that, add the "colour grade" node and change select the Black and White points. After the primary colour grading, then added one more colour grading node to make the footage dark the way I wanted it to be.

Image 30: Drone, shadow and colour correction


Adding light to the drone gives it a very realistic look; for that, we select certain parts individually.

This link is an interesting way of understanding the chromakey node.


For that, I took the main "diffuse direct" node which we created earlier and added a "ChromaKeyer" node; this keyer will pick out certain colours. Select the colour picker and choose the area where we need to add the colour change.

After that, the "invert" node was added to select the area where the colour change should be applied.

Next, the "grade" node has been used to change or enhance the colour and the "Blend" node was used to control the opacity of the effect. Then added the "Glow" node to make it light.

There was an issue where the "Glow" effect was appearing on the entire scene instead of just the asset. To fix this, a mask was added using a "Roto" node, with the pen tool used to select the area where the glow should not appear.

A tracker was created on the window's lower corner to track the mask's movement, which needed to follow the window. To link the mask to the tracker, we went to the "Roto" node's properties and selected "No animation on all nodes" by right-clicking on "transform". This unlocked the "translate" settings, which were then connected to the "Glow" node.


Image 31: Glow

One crucial part that remained was making the glow light flicker, which posed a problem. To find a solution, I searched online and found a helpful link. Using the formula I found there, I created a "Blend" node, went to its properties, right-clicked on the channel and pasted the formula as an expression. I was pleasantly surprised to see that the glow light was now flickering, which was a fantastic result.
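I have not reproduced the exact formula from that link, but as an illustration of the idea, a Nuke expression along these lines, pasted onto the blend/mix knob, drives a per-frame flicker:

clamp(0.6 + 0.4*noise(frame*0.7), 0, 1)

Here noise(frame*0.7) returns a smoothly varying pseudo-random value for each frame, so the glow intensity wobbles roughly between 0.2 and 1 instead of staying constant; the numbers are only example values to tune by eye.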

Image 32: Glow and Flicker

Breakdown

The first step is to take the original footage and create a "Write" node in Nuke, then move it step by step through the important nodes; by pressing D we can disable and enable a specific node to hide and show its effect. The important nodes which made the biggest difference to the footage are: Original Footage, Diffuse Direct, Diffuse Indirect, Specular Direct, Specular Indirect, Grain, Motion Blur, Shadow, Shadow Blend, Undistort, Colour Correction, Colour Grading and Glow. With each of these connected to the "Write" node in turn, I rendered it as a .jpg sequence by going into the preferences. While writing this blog I realised that I forgot to take screenshots of that part.

Now, in After Effects, we set up the composition with the frame rate at 24 and the HD 1080 preset. We click on the comp and bring all the .jpg sequences into it, then drag the files onto the timeline; whichever layer is on top is the one that will be visible. Because of that, we need to create a mask so we can reveal the layer underneath, and we want to animate that reveal. Pressing the P key shows the Position property with a stopwatch icon, where I can set the position the animation starts from. Then we set the key where it will stop; I gave it a time of roughly 2 seconds, dragged to the frame where I needed it and hit the small diamond-shaped button on the left side of the timeline. I repeated this process for all the layers I wanted to create, and added sound layers on top. As I am a musician and have used similar software, I somehow enjoyed this part. At last I rendered the sequence and finally finished my assignment. It was a lot of work, with a lot of mistakes and retakes; it was not easy.

Breakdown

Sounds have been taken from this free sound effects site.

Thank you for reading