Report on the FilmLight Workshop, 11 and 12 November at Camerimage 2019
By Thierry Beaumel, for the AFC

First Seminar
Daniele began with the more “theoretical” part, devoted to human perception and its constant adaptations.
The seminar began with the display of a still image of a face. The color grading of this image changed over the course of a few minutes, and the viewers were asked to say what parameters they believed had been modified. The audience was able to identify changes in contrast and saturation during the screening, and everyone thought that the changes were rather subtle.
Daniele then compared the key points of his dynamic color grading in side-by-side cuts: everyone was shocked to see how significant the differences were in contrast (heavily altered blacks and distorted luminance) and in saturation (almost black and white). Even more surprisingly, he had also distorted the shape of the face, bringing the two eyes closer together as in a caricature, and nobody in the audience had noticed the change! This demonstrated how unreliable our perception of such changes is when they happen gradually, because the brain constantly reconstructs what we think we are looking at. It is therefore questionable whether it’s worth spending too much time on a single shot in grading!
Other visual experiments followed, demonstrating the constant adaptations of our sight (and its unreliability over time). After looking at a negative image for a few seconds, when it cuts to a completely white image, we still see the positive image linger for a few seconds! Similarly, the brain adds color to a black-and-white image after having been shown only the chrominance of that image!
Another experiment showed us that if blur is applied only to the chrominance of an image (up to 60%), the image still appears just as sharp to us, while 5% of blur on the luminance immediately makes the image look blurry. Similarly, our surroundings greatly influence our feeling of neutrality (we lose sensitivity to a color when it dominates our environment), not to mention the change in the colorimetry of our own eyes, which becomes warmer as we age.
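For readers who want to try this on their own images, here is a minimal sketch of the chroma-versus-luma blur experiment in Python with OpenCV, using a YCrCb separation as a stand-in for the luminance/chrominance split shown in the seminar; the file name and blur amounts are placeholders, not the seminar’s values.

```python
# Minimal sketch: blur chrominance heavily vs. blur luminance slightly.
# "face.png" is a placeholder input; sigma values are illustrative only.
import cv2

img = cv2.imread("face.png")                      # 8-bit BGR image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)    # split luminance (Y) from chrominance (Cr, Cb)
y, cr, cb = cv2.split(ycrcb)

# Heavy blur on chrominance only: the result still looks sharp.
cr_blur = cv2.GaussianBlur(cr, (0, 0), sigmaX=15)
cb_blur = cv2.GaussianBlur(cb, (0, 0), sigmaX=15)
chroma_blurred = cv2.cvtColor(cv2.merge([y, cr_blur, cb_blur]), cv2.COLOR_YCrCb2BGR)

# Even a slight blur on luminance only is immediately visible.
y_blur = cv2.GaussianBlur(y, (0, 0), sigmaX=1.5)
luma_blurred = cv2.cvtColor(cv2.merge([y_blur, cr, cb]), cv2.COLOR_YCrCb2BGR)

cv2.imwrite("chroma_blurred.png", chroma_blurred)
cv2.imwrite("luma_blurred.png", luma_blurred)
```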
A few direct conclusions for our work, drawn from a great deal of information: our brain is always seeking to reconcile what it sees with what it knows. The sensation of a color or of a luminosity can be reinforced by preparing the audience with a complementary color or an inverse luminosity. We only perceive rapid changes. We mustn’t be afraid of a high level of luminosity, because our brain naturally adapts. A reference that is regularly shown “resets the counters to zero”. Because of that adaptation, we are unable to compare an SDR image with an HDR image at the same time (always take a break when shifting from one mode to the other).
Andy spoke next about the “development of digital images”, in two parts: what the zero position of our color grading settings should be (the zero-fader position), followed by a few proposed best practices for developing a specific look and for grading.
He showed us, side by side, the same RAW file developed three different ways: with the camera manufacturer’s software, with ACES, and with FilmLight. The three images are different; each is fine on its own, none better or worse, just slightly different. It boils down to an artistic choice more than anything else. Of course, the same test could be done with radical grading in contrast, color, and density, to see how the image behaves at its extremes.
As he broke down the image-development parameters, Andy shared his own choices and provided examples. For demosaicing, he strongly recommends using the manufacturer’s version contained in their SDK. For scaling (changing the size of the image between the sensor’s resolution and the resolution of the program being created), there is no standard, and every algorithm will fare better or worse depending on how much resizing has to be done (if you’re shooting in 8K and intend to release the film in 2K, that is a very important parameter). So you have to try things out, and each color grading software has its own “recipes” and possible choices.
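To make the point about trying things out concrete, here is a minimal sketch comparing a few common resampling kernels for a large downscale (an 8K-class source to a 2K deliverable) in Python with OpenCV; the kernels, sizes, and file names are illustrative and are not the internal “recipes” of any particular grading system.

```python
# Minimal sketch: the same frame downscaled with different resampling filters.
# "frame_8k.png" and the 2K target size are placeholders for illustration.
import cv2

src = cv2.imread("frame_8k.png")
target = (2048, 1080)                 # (width, height) of a 2K DCI deliverable

candidates = {
    "area":    cv2.INTER_AREA,        # averaging; usually safe for strong downscales
    "lanczos": cv2.INTER_LANCZOS4,    # sharper, but can ring on high-contrast edges
    "bicubic": cv2.INTER_CUBIC,
}

for name, flag in candidates.items():
    out = cv2.resize(src, target, interpolation=flag)
    cv2.imwrite(f"frame_2k_{name}.png", out)    # compare the results on the target display
```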
FilmLight’s software seems very powerful. The texture adjustment is something that should be set according to the camera and the type of display: a general setting should be included in the project, with scene-by-scene modifications as needed. Don’t try to push the “sharpen” setting too far, as other tools can do the job better and are much less destructive (e.g. Texture Highlight). The project should therefore be conceived at zero fader position, which will result in an “honestly pleasing” image in the desired artistic direction and will also ensure a complete workflow for the project. This base project should not depend on a specific type of output, so as to allow for universal archiving that does not limit possible outputs in the future (another resolution, color space, HDR, etc.).
Then, it will be possible to work on the project’s look (or looks).
The idea is to always work on the image with global tools that ensure robustness across the entire project. For example, choosing to isolate a color means it will always have to be readjusted individually, scene by scene, with specific flaws that have to be watched for. Any more general solution for color should therefore be preferred (Color Crosstalk, curves…), allowing for quicker work and better overall quality.
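To make the idea of a “global” color operation concrete, here is a minimal sketch of a generic channel-crosstalk matrix in Python; this is not FilmLight’s Color Crosstalk implementation, only the underlying idea that every output channel becomes a weighted mix of all three inputs, applied identically to every pixel of every shot.

```python
# Minimal sketch of a global channel-crosstalk operation (illustrative weights).
import numpy as np

# Rows = output R, G, B; columns = input R, G, B. The identity matrix = no change.
crosstalk = np.array([
    [0.90, 0.07, 0.03],
    [0.05, 0.92, 0.03],
    [0.02, 0.05, 0.93],
])

def apply_crosstalk(rgb: np.ndarray) -> np.ndarray:
    """Apply the mix to an (H, W, 3) float image."""
    return rgb @ crosstalk.T

# Because the same matrix is applied everywhere, the result stays consistent
# across the whole project, unlike a per-shot color key that must be re-tuned.
```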
To fix a problem with the image, always try to find where it comes from (at what point in the workflow the problem appears) and fix it at that level, rather than constantly adding extra layers in color grading.
Don’t forget that different displays have different levels of definition in their rendering. For example, a DLP projector’s definition (MTF) in bright areas is much lower than that of an OLED or LCD screen. Different settings on the Texture Highlight tool allow you to finely adjust that parameter depending on the final destination.
FilmLight provides us with an organization for color grading that follows these principles (from entry into the machine, at the top, to output, at the bottom); a minimal sketch of this ordering follows the list:
- Footage
- Scaling Algorithm
- Base Grade
- Compress Gamut
- Color grading layers by shot and by sequence…
- Look Layers
- Texture Highlight
- Texture Equalizer
- DRT
- CS Mastering
- White Point.
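As a rough mental model of that ordering (only a sketch, not FilmLight’s actual implementation), the stack can be thought of as a fixed, ordered list of stages applied from the footage downwards, with per-shot work confined to its own slot in the middle; the stage names below simply mirror the list above and the operators are placeholders.

```python
# Minimal sketch: a fixed stack of stages applied in order, top to bottom.
STACK = [
    "scaling_algorithm",
    "base_grade",
    "compress_gamut",
    "shot_and_sequence_layers",
    "look_layers",
    "texture_highlight",
    "texture_equalizer",
    "drt",
    "cs_mastering",
    "white_point",
]

def render(footage, operators):
    """Apply each stage in the fixed order; stages not supplied default to identity."""
    out = footage
    for name in STACK:
        out = operators.get(name, lambda x: x)(out)
    return out
```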
Of course, you have to be very careful about the viewing conditions in the grading room: reflections of light or color onto the display, possible pollution (emergency exit signs), flares that come from the system, the room, or the viewers!
Second Seminar
The next day, Daniele spoke again to remind us that HDR is not just an increase in the amount of light: it is also an increase in the dynamic range of the display system, since it raises the luminance of whites and lowers the luminance of blacks wherever the technology allows (projection, LCD).
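As a back-of-the-envelope illustration of that extra dynamic range, with assumed figures (not numbers quoted in the seminar) of a 100-nit SDR display with 0.05-nit blacks versus a 1,000-nit HDR display with 0.005-nit blacks:

```python
# Minimal sketch: dynamic range in stops for assumed SDR and HDR display figures.
import math

def stops(peak_nits: float, black_nits: float) -> float:
    return math.log2(peak_nits / black_nits)

print(f"SDR: {stops(100, 0.05):.1f} stops")      # ~11.0 stops
print(f"HDR: {stops(1000, 0.005):.1f} stops")    # ~17.6 stops
```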
When it comes to shooting, HDR is closer to what is seen on set, so the dosing of effects must be adapted (details are more easily perceived in bright areas), and the rendering of light sources and saturated colors is much more realistic, no longer limited by the display.
Noise is more of an issue because it is more visible in HDR; the same goes for the image’s higher apparent definition, in which details such as skin are more visible and might require special treatment (the Texture Highlight tool). Finally, in some shots the center of interest might shift (lights inside the frame that are too strong, or exteriors that become overly visible), which might also require special treatment with more precise tools.
After all of the very technical talk, FilmLight invited colorist Elodie Ichter and color scientist Matthieux Tomlinson, both of whom work at Harbor, to discuss The Irishman, directed by Martin Scorsese, with cinematography by Rodrigo Prieto.
The film takes place over a long period of time, and both main characters were digitally made to look younger. The cinematographer wanted to shoot the entire film on 35mm, but the VFX team insisted that the shots requiring the actors to be made younger be shot with a digital camera. Sequences therefore mix the two kinds of images, and it was very successfully done.
Elodie and Matthieux were there to tell us about their work on the film from the color and postproduction perspectives. Before shooting began, Rodrigo Prieto brought in still and moving reference images (paintings, photos, clips from films…) to provide food for thought. After discussing them with Elodie, Matthieux built “base” looks. Rodrigo then returned to grade tests shot on location or on set, with costumes and set pieces (day, night, interior, exterior, etc.). Elodie and Rodrigo graded the tests with the looks Matthieux had proposed; in parallel, on a second Baselight, Matthieux would modify the looks in real time in response to Rodrigo’s comments, so that Elodie could make the corresponding changes to her grading. Four main looks came out of this process: Kodachrome, Ektachrome, ENR (bleach bypass positive), and an ENR overdeveloped by one stop. The advantage of digital tools is that these looks can be combined and dosed scene by scene, both in percentage and in the zones they act on (densities).
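As a minimal sketch of what “dosing” looks by percentage can mean in practice (the look functions and weights here are placeholders, not the actual Kodachrome or ENR looks, and not Baselight’s blending machinery):

```python
# Minimal sketch: blend the results of several look transforms with weights.
import numpy as np

def blend_looks(img, looks, weights):
    """looks: list of callables taking and returning an image; weights sum to 1.0."""
    out = np.zeros_like(img, dtype=np.float32)
    for look, w in zip(looks, weights):
        out += w * look(img).astype(np.float32)
    return out

# e.g. 70% of one look and 30% of another, chosen scene by scene:
# graded = blend_looks(frame, [look_a, look_b], [0.7, 0.3])
```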
Elodie then graded the dailies every day in her lab with these looks, “which is absolutely necessary to see what the film is going to look like during editing.” Shooting took place over the course of six months (108 days of shooting in total).
After editing, all of the 35mm shots were rescanned and conformed with the VFX from ILM, who also took care of adding grain to the digital images. The scenes where the actors had to be made to look younger were shot with a RED Helium, and Matthieux made the RED images match the scans so closely that Elodie couldn’t see any difference in how they reacted during color grading. She mainly used “printer lights” as her base tool (printer points). Matthieux insisted on how important it is to always work in a “camera referred” space and only to convert the image to “display referred” (Rec 709, P3, or Rec 2020) when it comes time to produce the different formats. The DI, with Yvan Lucas (Harbor), went smoothly and took four weeks (the film is 3.5 hours long), with the cinematographer present the entire time. The master was made in P3 SDR for projection.
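As a minimal sketch of those two ideas together, printer-light-style offsets applied in a log, camera-referred encoding, with the display conversion kept for the very end; the size of one “point” is an assumption here (a small, fixed fraction of a stop in 10-bit log code values), not Harbor’s actual calibration:

```python
# Minimal sketch: printer-light offsets in a camera-referred log encoding.
import numpy as np

# Assumed: 12 points per stop, 90 code values per stop in a 10-bit log curve.
POINT = (1.0 / 12.0) * (90.0 / 1023.0)

def printer_lights(log_rgb: np.ndarray, r: int = 0, g: int = 0, b: int = 0) -> np.ndarray:
    """Offset each channel of an (H, W, 3) normalized log image by whole points."""
    return log_rgb + np.array([r, g, b], dtype=np.float32) * POINT

# All grading (printer lights, looks) happens on the log, camera-referred image;
# the display-referred conversion (Rec 709, P3, Rec 2020) is applied only when
# rendering each deliverable, as described above.
```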
There were then two weeks of grading for the HDR master and its various formats. Elodie found Dolby Vision’s trim pass to be quick and easy.
(Translated from French by A. Baron-Raiffe)