LUTs: They Can’t Create, Only Modify.
Maybe Just Shuffle Some Values Around
LUTs (Look-Up Tables) have become an essential tool. They help display compressed dynamic range footage in formats like Rec. 709, allow for precise display calibration, and are widely used to develop looks or presets. In restoration, we use them to correctly visualize Cineon log-encoded material. As LUTs have gained popularity, they've moved from purely technical applications to creative ones, shared and sold among filmmakers and content creators alike. However, this widespread adoption has also led to misconceptions, particularly in color recovery and restoration. One of the most common is the belief that a LUT can magically restore the original look of faded footage—just apply the "right" LUT, and your faded Eastman color prints will return to full color. This is far from the truth. While LUTs can improve channel balance in faded prints, they are inherently limited by their mathematical nature.
For instance, let me show you a trailer I found on the Internet Archive. It’s from the movie Ben and appears to be an Eastman color print, and as you can clearly see, it’s quite faded.
Now, obviously, these trailers weren't correctly digitized. As I always say, pink or magenta should never make it to the restoration process. That color imbalance needs to be corrected in digitization, even if it doesn’t look good. Ensuring that all the channels have enough information is crucial for later processes.
For this case, let's balance the image ourselves.
Looking better, but the shadows are still a bit contaminated by blue, and the skin tones seem to have a magenta tint. It’s not perfect, but it’s an improvement.
I’ve created a color reference in Photoshop for this using the new neural filters, so the color reference should look something like this:
Now let's create a LUT in NukeX and apply it to the color-balanced footage to see if it works as intended:
At first glance, it looks okay, but on closer inspection, many of the hues created during the Photoshop phase are missing. The blue jacket on the man with the white helmet is now brown, and the red leather from the interviewer has lost saturation, also turning brown. In fact, a lot of the hues in the picture have shifted to a reddish-brown tone. But why is this happening? Why does the LUT fall so short?
The Core Limitation of LUTs
A LUT, at its core, is exactly what its name says: a look-up table—a precomputed mapping that transforms input color values from your source footage to produce a desired look. A 1D LUT remaps each channel independently, replacing every input value with the corresponding table entry. For example, a 1D LUT might remap the red channel so that every value is scaled by 1.2, brightening the reds in the image.
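To make the mechanics concrete, here is a minimal sketch of a 1D LUT in Python with NumPy. The function name `apply_1d_lut` and the 1.2x red gain are my own illustrative choices, not a specific tool's API; real 1D LUTs work the same way, as one table of output values per channel, indexed by the input value.

```python
import numpy as np

def apply_1d_lut(image, r_table, g_table, b_table):
    """image: uint8 array of shape (H, W, 3); each table has 256 entries.
    Each channel is remapped independently by indexing into its table."""
    out = np.empty_like(image)
    out[..., 0] = r_table[image[..., 0]]
    out[..., 1] = g_table[image[..., 1]]
    out[..., 2] = b_table[image[..., 2]]
    return out

# An identity table leaves a channel untouched; the red table bakes
# in a 1.2x gain, clipped to the valid 8-bit range.
identity = np.arange(256, dtype=np.uint8)
red_gain = np.clip(np.arange(256) * 1.2, 0, 255).astype(np.uint8)

pixel = np.array([[[100, 100, 100]]], dtype=np.uint8)
print(apply_1d_lut(pixel, red_gain, identity, identity))  # [[[120 100 100]]]
```

Note that the table is the whole transform: every pixel with the same input value gets the same output value, with no awareness of its neighbors or its history.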
On the other hand, 3D LUTs operate in a three-dimensional color space, where they map input colors to output colors using more complex interpolations. This allows for nuanced color adjustments, such as shifting a specific shade of blue to a different hue without affecting other colors. However, regardless of whether they are 1D or 3D, the primary function of LUTs is to modify existing color information rather than create new data. This limitation becomes evident if the original color information (for instance, certain hues in the RGB channels) is missing or faded—no amount of transformation can bring it back. As the saying goes, you can’t create something out of nothing.
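A sketch of the 3D case, under simplifying assumptions: the grid size of 17 matches common .cube files, but I use nearest-node lookup for brevity, where real implementations interpolate (trilinearly or tetrahedrally) between nodes. The point is that the table is jointly indexed by all three channels, so one specific (R, G, B) region can be reshaped without touching the rest.

```python
import numpy as np

SIZE = 17  # a common 3D LUT grid size

# Build an identity 3D LUT: node (i, j, k) stores the color it represents.
lut = np.stack(np.meshgrid(
    np.linspace(0, 1, SIZE),
    np.linspace(0, 1, SIZE),
    np.linspace(0, 1, SIZE),
    indexing="ij"), axis=-1)  # shape (17, 17, 17, 3)

# Hypothetical creative tweak: push one dark-blue node toward cyan.
# Only colors near that node are affected; everything else passes through.
lut[2, 2, 12] = [0.1, 0.5, 0.8]

def apply_3d_lut(rgb, lut):
    """rgb: float triplet in [0, 1]; nearest-node lookup for simplicity."""
    idx = np.round(np.array(rgb) * (lut.shape[0] - 1)).astype(int)
    return lut[idx[0], idx[1], idx[2]]

print(apply_3d_lut((0.5, 0.5, 0.5), lut))  # mid-gray unchanged: [0.5 0.5 0.5]
```

Even with this extra expressiveness, the output is still a fixed function of the input triplet—which is exactly the limitation at issue.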
One way to visualize this limitation is by applying a LUT to a black-and-white image. Every pixel in a grayscale image has identical RGB values, so the LUT can only send each gray level to a single output color—it has no way to know which of those pixels were once blue and which were once red. The clearly flawed result exposes the inherent constraints of this method.
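The argument can be demonstrated in a few lines of plain Python. The `any_lut` function below is a hypothetical stand-in for an arbitrary LUT: since a LUT is a pure function of the input triplet, any two pixels that have faded to the same value must produce identical outputs, no matter how different their original colors were.

```python
# Stand-in for an arbitrary LUT: some fixed, deterministic transform
# of the input triplet (the specific numbers here are illustrative).
def any_lut(rgb):
    r, g, b = rgb
    return (min(255, int(r * 1.1)), g, min(255, int(b * 0.9)))

faded_sky  = (128, 128, 128)  # was blue, now a neutral gray
faded_skin = (128, 128, 128)  # was a warm tone, now the same gray

# Identical inputs can never diverge: the lost distinction is gone for good.
assert any_lut(faded_sky) == any_lut(faded_skin)
```

This holds for any LUT, 1D or 3D, regardless of size or interpolation: once distinct hues have collapsed to the same faded value, no table can pull them apart again.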
What about Masks and Selections?
Beyond LUTs, there are color grading tools like masks and selections, or even magic masks (I’m looking at you, BMD) that can be useful for small, targeted fixes within a scene. These tools allow colorists to isolate certain areas of the image for specific adjustments. However, using them for an entire scene requires a level of precision that often demands rotoscoping, which is itself a time-consuming, labor-intensive, and expensive procedure. While promising technologies like auto-generated cryptomattes may simplify this workflow in the future, current results are often subpar for large-scale fixes. It’s not yet a viable solution for an entire reel, let alone a full movie.
The Role of Machine Learning in Color Restoration
Unlike LUTs, which can only modify existing data, machine learning models can be trained with a reference to create an inference of the remaining color information and interpolate missing details, offering a more precise and flexible approach to restoration. These models can be as specific as necessary—project, reel, or shot-based, depending on the needs of the project. Here’s an example using machine learning alongside other methods.
So, is there a place for LUTs in this restoration world?
Is using LUTs for color recovery flawed? In some cases, yes. But they can still be useful for simpler recoveries where not all three RGB channels are degraded. The challenge arises when a scene has a rich palette of colors—this is where LUTs alone won’t be sufficient. In my upcoming AMIA talk and workshop in December, I will dive deeper into workflows that combine LUTs with more advanced techniques, including the machine learning workflow I mentioned.