Awakening - Unleashing the Spirit of War: A Workflow Report on Nuke and Maya Production Pipeline
Introduction
This report summarises the growth of this project: the concept, delivery, and enhancement of a visual story that reflects the return of the General of the Undead Evil. It covers the journey through the Nuke and Maya production pipeline, from setup to finishing touches, including ideation, research, engagement with the instructional materials, the production stages, feedback assimilation, and reflection.
Inspiration and Guiding Concept
The inspiration for this project stems from Firebase, episode 1 of the Netflix series by Neill Blomkamp's Oats Studios, together with a fascination with historical narratives and the contrast between past and present. The guiding concept revolves around the notion of the past haunting the present, as the malevolent spirit of war reawakens, seeking dominion over the world. This concept drives the visual narrative, guiding the selection of assets, techniques, and narrative elements.
Independent Research and Incorporation
I watched the episode and looked at other works by Neill Blomkamp, as well as depictions of evil spirits in various cultures and mythologies. This research informed the visual aesthetic, character design, and thematic elements incorporated into the project.
My project introduces different elements and time periods while keeping the base foundation of the episode's story.
Interaction with Instructional Materials
Canvas was the major source of technical learning material; it provided deep yet accessible explanations of the techniques I used throughout the process. Tutorials on camera tracking, match-move data integration, Nuke and Maya workflows, and compositing techniques made it feasible to gain experience with the tools required for the project.
Stages of Production and Workflow
The stages of production and workflow were quite linear. Camera tracking, solving, retiming and reformatting, lens distortion, the ST map, color grading, and color correction were done in Nuke. The environment setup, assets, lighting, and render layers were created in Maya.
Workflow Overview:
Setting up the File Directory and Importing the Footage:
I started by importing the footage into Nuke using a Read node and ensured the proper organization of files within the project directory. I made three folders under one main folder to keep the files arranged and easy to access, and I also made sure to set the project directory.
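As a rough illustration, the same setup could be scripted from Nuke's Python console; the folder layout and footage path below are hypothetical placeholders, not the actual project paths.

```python
import nuke

# Point the script at the project directory so relative paths resolve.
nuke.root()['project_directory'].setValue('/projects/awakening')

# Import the plate as an image sequence via a Read node.
read = nuke.nodes.Read(file='footage/plate.####.exr', first=1, last=300)
```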
Retime and Reformat:
I utilized the Retime node to retime the footage onto the desired timeline for the shot and to adjust the timing of specific frames. I then applied a Reformat node to match the desired output format and resolution, which was HD 1080p at 24 fps.
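A minimal sketch of this step in Nuke Python, assuming a simple frame-range mapping; the frame values are placeholders.

```python
import nuke

# Retime the source range onto the shot's timeline (placeholder values).
retime = nuke.nodes.Retime()
retime['input.first'].setValue(1)
retime['input.last'].setValue(300)

# Conform to the HD 1080p delivery format.
reformat = nuke.nodes.Reformat(format='HD_1080')
reformat.setInput(0, retime)
```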
Lens Distortion Correction and ST map:
I used the LensDistortion node to correct distortions in the footage. Lens distortion bends the image away from how the scene actually appeared, so it has to be analysed and removed before tracking. In the analysis, I detected the grid and adjusted the points so they were properly aligned and spread out, then solved the distortion with a solve error close to 0.6.
To create the ST map, I added a Write node and rendered the lens distortion data as an image sequence that could be reused on the footage later. The ST map is crucial because it ensures that distortion is applied accurately and consistently across the entire footage.
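For reference, rendering the ST map could look roughly like this in Nuke Python; the 'output' knob and its STMap option are assumptions that vary between LensDistortion versions, and the paths are placeholders.

```python
import nuke

# Switch the LensDistortion node to output the distortion as an ST map.
ld = nuke.toNode('LensDistortion1')
ld['output'].setValue('STMap')  # assumed knob name / option

# Render the ST map as an EXR sequence for reuse later in the comp.
write = nuke.nodes.Write(file='stmap/stmap.####.exr', file_type='exr')
write.setInput(0, ld)
nuke.execute(write, 1, 300)
```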
Problems faced and solution -
While solving the lens distortion, the result would always drift off the frame, no matter what I did. I watched the tutorials on Canvas, Foundry's tutorial, and Hugo's Desk tutorial to understand the problem: I was overlapping the lines while correcting them, which added extra keys to the solver. I deleted the keys and redid the process, and it worked.
Also, this was my second attempt at creating the VFX shot, because the first time I had missed rendering out the ST map, which resulted in my scene not being tracked.
Camera Tracking:
I tracked and examined the movement in the video using a CameraTracker node. The video initially had some issues with jerky motion and inaccurate tracking points. However, I managed to overcome these problems by increasing the number of features and adjusting the distance threshold in the CameraTracker node. I had to fiddle with the features and threshold a lot to get the right number of detections for the footage.
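As a hedged sketch, those settings could also be adjusted via Python; the knob names ('numFeatures', 'featureSeparation') and values below are assumptions and may differ between Nuke versions.

```python
import nuke

# Create the tracker and raise the feature count for a busy plate.
ct = nuke.nodes.CameraTracker()
ct['numFeatures'].setValue(400)       # assumed knob name
ct['featureSeparation'].setValue(20)  # assumed knob name: spread detections apart
```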
Problems faced and solution:
During this stage, I had some problems finding the number of features (locators) suitable for tracking. Sometimes I would only get locators in certain areas I wanted, but after adding trackers I was able to get more features at a reasonable density. After solving, some of them were dead locators and some areas were not detected at all, but with adjustment I got the result I wanted.
I also used Tracker nodes, which function like digital breadcrumbs, to mark the locations of individuals or items in the video. This made it possible to precisely track each subject's location and movement throughout the shot. I used around 12 trackers for the footage, and to get the tracking right, I had to re-track certain frames until the whole tracking line was green.
After adding the trackers, I solved the footage, and under the auto-tracks I adjusted the minimum length, max track error, and max error until I was happy with the solve.
Further, I added a card and a grid aligned with the footage's surface to check that the object tracking was right.
Scanline Rendering:
To export the Nuke scene as an FBX file, I used a ScanlineRender node to merge the background, the cube, and the scene. The ScanlineRender node comes with a Scene node, to which I attached my camera; I also attached the CameraTracker node to the camera because I wanted to see the camera info in the scene. The Scene node acts as the glue that brings the camera tracker and the point cloud data together with the background and objects for the ScanlineRender node. After that, a WriteGeo node can export the scene as an FBX file.
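A sketch of that export graph in Nuke Python, under the assumption that the camera and geometry nodes already exist in the script; the file path is a placeholder.

```python
import nuke

# The Scene node gathers the camera, point cloud, and geometry together.
scene = nuke.nodes.Scene()

# ScanlineRender: input 0 is the background, input 1 the obj/scene.
scanline = nuke.nodes.ScanlineRender()
scanline.setInput(1, scene)

# WriteGeo exports the assembled scene as an FBX file.
writegeo = nuke.nodes.WriteGeo(file='export/tracked_scene.fbx')
writegeo.setInput(0, scene)
nuke.execute(writegeo, 1, 1)
```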
MAYA WORKFLOW:
Setting up -
After exporting the scene from Nuke, I imported it into Maya and started setting it up. The first thing I did was group all the locators together and scale them up to increase their visibility, adding the group as a display layer in the Channel Box.
Then I organized the Outliner by grouping the shot camera, ground plane, and locators under one group.
Also, with the camera selected, I imported the undistorted footage as a sequence and changed the camera settings to fit the resolution gate.
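A minimal maya.cmds sketch of this setup, assuming the imported locators follow a 'locator*' naming pattern; the group, layer, camera, and file names are all hypothetical.

```python
import maya.cmds as cmds

# Group the imported locators and scale the group up for visibility.
locators = cmds.ls('locator*', type='transform')
grp = cmds.group(locators, name='tracking_locators_grp')
cmds.scale(10, 10, 10, grp)

# Put the group on its own display layer so it can be toggled in the UI.
cmds.createDisplayLayer(grp, name='locators_layer')

# Attach the undistorted plate to the shot camera as an image plane.
cmds.imagePlane(camera='shotCamera', fileName='footage/undistorted.0001.exr')
```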
Problems faced and solution
For some reason, my cache kept running low, and my timeline wouldn't play beyond 160 frames even though I had 300 frames in total. I tried increasing the frame cache and the GPU memory threshold, but it didn't work. After searching on Google and going through Reddit forums, I found the solution: increasing the memory allocated to Cached Playback, which worked.
HDRI:
For the HDRI, we were given different pictures of the footage's surroundings which, when merged, would give a complete HDRI. But I wanted to change the sky's color, as well as the overall color a bit. So I took the images, assembled them into an HDRI in Photoshop, and masked out the sky to change its color. I wanted the sky brownish-red because the scene is about the uprising of an undead army. I then exported the new HDRI as an EXR file for the skydome light in Maya.
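Hooking the new EXR up to a skydome light could be scripted roughly as follows, assuming the Arnold plug-in is loaded; the file path is a placeholder.

```python
import maya.cmds as cmds

# Create an Arnold skydome light and a file texture for the custom HDRI.
sky = cmds.shadingNode('aiSkyDomeLight', asLight=True)
tex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(tex + '.fileTextureName',
             'sourceimages/hdri_red_sky.exr', type='string')

# Drive the dome's color with the HDRI.
cmds.connectAttr(tex + '.outColor', sky + '.color', force=True)
```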
IMPORTING THE ASSETS:
After adding the HDRI to the skydome light, I proceeded to import the assets and set up the scene. The assets used in the scene were selected with my vision in mind of creating something similar to Neill Blomkamp's Oats Studios Firebase episode.
I referenced the assets through the Reference Editor. For the main character, the center of the whole composition, I went with an ancient character model from Sketchfab. Fortunately, the model came with the same animation I wanted for my shot.
For the environment, I wanted to showcase an uprising army with banners and pillars that shape the overall appeal of the scene. The environment assets are also somewhat medieval, which is part of the vision.
For the crowd, I settled on the undead zombie from Mixamo and duplicated it into a whole crowd to depict the undead army. I did not add any animation, as I wanted the army to be static.
Creating Z Depth Planes:
I created Z-depth planes to enable depth-based effects during compositing, providing additional depth and enhancing the sense of perspective within the scene.
AOV SETUP:
Before creating the render layers, I first set up the AOVs for the shot.
I added some custom AOVs such as motion, AO, shadow matte, and depth, for which I had to override the shader. In addition, I added diffuse direct, diffuse indirect, specular direct, specular indirect, crypto asset, crypto object, crypto material, and emission.
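A hedged sketch of registering those AOVs through Arnold's Python interface, assuming mtoa is loaded; the custom shader overrides for AO and the shadow matte would be set up separately.

```python
from mtoa.aovs import AOVInterface

# Register the standard Arnold AOVs used for the comp.
aovs = AOVInterface()
for name in ['diffuse_direct', 'diffuse_indirect', 'specular_direct',
             'specular_indirect', 'emission', 'Z', 'motionvector',
             'crypto_asset', 'crypto_object', 'crypto_material']:
    aovs.addAOV(name)
```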
RENDER LAYERS SETUP:
For the render layers, I made three separate layers: the main character, the structures (props), and the crowd. Each layer had the shadow matte material overridden on the ground plane for the shadow.
This was my second attempt, so I didn't want to spend a lot of time on renders, as it had been taking me days to render the different render layers. Therefore, I set up the three layers with shadows and merged AOVs when rendering out, except for the Evil_Main layer, which had already been rendered before I learned about merged AOVs. Then I rendered out the render layers for the composition.
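For illustration, the three layers could be created with Maya's Render Setup API as below; the layer names mirror the ones above, and the shadow-matte override itself would be added to each collection.

```python
import maya.app.renderSetup.model.renderSetup as renderSetup

# One render layer per asset group, each with a collection that
# could carry the shadow-matte material override.
rs = renderSetup.instance()
for name in ['Evil_Main', 'Props', 'Crowd']:
    layer = rs.createRenderLayer(name)
    collection = layer.createCollection(name + '_col')
```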
NUKE WORKFLOW:
IMPORTING THE FOOTAGE AND COLOR GRADING:
I started by importing the footage and moved on to color grading. I added a Keyer to mask out the sky and then a Grade node to grade the sky and make it more prominent. I added a Premult node to merge it over the footage, and then another Keyer node to blend the grading with the first keyer.
Next, I applied the ST map, using the rendered lens distortion sequence as its source, together with a Reformat node and a Shuffle node in which I had to match the input layer's depth to the output layer's depth.
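A rough Nuke Python sketch of that step, assuming the ST map was rendered into the rgb channels; the node and file names are placeholders.

```python
import nuke

# Read the ST map sequence rendered earlier from the LensDistortion node.
stmap_read = nuke.nodes.Read(file='stmap/stmap.####.exr', first=1, last=300)

# Conform the map to the plate, then sample the comp through it.
reformat = nuke.nodes.Reformat(format='HD_1080')
reformat.setInput(0, stmap_read)

stmap = nuke.nodes.STMap(uv='rgb')        # use rgb as the UV lookup layer
stmap.setInput(0, nuke.toNode('Grade1'))  # graded comp branch (hypothetical)
stmap.setInput(1, reformat)
```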
Later on, I added a ZDefocus node to add some depth of field to the shot, though I later decided not to use it.
ADDING THE RENDER LAYERS
I copied the whole grading node tree and the ST map setup to apply them to the render layers for the composition.
I started adding the render layers to get an idea of how the composition would be laid down.
I added the different render layers with Shuffle nodes set to their respective input and output layers, then combined the render layers using Merge nodes set to the plus operation.
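As an example of the pattern, one layer could be shuffled out and plus-merged like this; the file path and node names are hypothetical.

```python
import nuke

# Bring in one render layer's multichannel EXR.
layer_read = nuke.nodes.Read(file='renders/crowd.####.exr', first=1, last=300)

# Shuffle the wanted AOV layer into rgba.
shuffle = nuke.nodes.Shuffle()
shuffle['in'].setValue('diffuse_direct')
shuffle.setInput(0, layer_read)

# Plus-merge it over the existing comp branch (A over B with 'plus').
merge = nuke.nodes.Merge2(operation='plus')
merge.setInput(0, nuke.toNode('Merge1'))  # hypothetical existing branch
merge.setInput(1, shuffle)
```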
The portal was a last-minute detail that I added for the main character to give it more depth and something that stands out.
The images above show the breakdown of the nodes I used for the composition.
Also, I added a number of color grade and color correction nodes for the look and feel of the composition.
Later on, I added a Write node to render out the composited sequence.
CHALLENGES AND SETBACKS:
The main challenge for me was the objects not tracking with the footage after rendering from Maya. This was a major setback, as I had to redo the assignment because I had forgotten to render the ST map information for the lens distortion. The second challenge was rendering out the render layers, as they took a lot of time. The cache memory was also a problem, but I was able to fix it by increasing the Cached Playback memory.
The approach worked okay the first time, but the second time, after watching more tutorials and getting more exposure to how things worked, it became much easier knowing what to do and where to do it. Doing the project a second time forced me to learn more about the topics behind the production. I re-watched the Canvas tutorials and was introduced to Hugo's Desk and Foundry's tutorials, which gave me further knowledge. I also went a bit slower, followed the instructions more carefully, and learned about each step before starting it.
My final product was successful, though it could have been better with additional assets and a bit more lighting in the scene, as well as volumetric fog and maybe some fire.
The lessons taught have prepared me well for the next project that I will tackle. I am quite confident about it.
THE BREAKDOWN VIDEO:
For the breakdown video, I added a Write node and rendered out the passes one by one to make different sequences: the normal footage, the graded footage, AO, the render layers, the portal, the graded render layers, and the crypto material.
Conclusion:
This was my first time making a VFX shot, which was not easy at first, but with my second attempt I think I did alright. The materials and guiding support provided by our mentor were very helpful and easy to understand. This project expanded my view of different methods of making a shot and compositing it in software like Nuke. The workflow taught me the importance of building file directories and renaming files more than ever. Also, I picked up a few new tricks in Maya which I know will help me a lot in the future. Overall, it was a good experience getting to know this much about compositing a VFX shot.