"There are several ways to deal with the RED's footage. Most of the main line non-linear editors have come up with methods for dealing with the work flow. I do think that over time, really, the RED was designed really to be a digital cinema camera. I think in a lot of cases the issue arose because of the price point on the camera. The camera came out at $17,500 which was cheaper than a lot of HD cameras. So I think you're going to see a lot of people who jumped on the band wagon, really thought it was the future, certainly the highest resolution image sensor that had come out. I like the idea that RED is about, power to the people, innovation, which is all great stuff.
But any time you have a raw image coming into a system (and by the way, this is not just the RED, it's also the SI-2K) there are extra steps. Most people don't use the ARRI D-21 as a raw camera. Most people take the output and feed it to HDCAM SR as 10-bit log, so it's already a bitmap when they're shooting. I don't personally know of any projects where the ARRI D-20 or the D-21 was used in raw mode, though I'm sure there must be some.
But the SI-2K and the RED both use this Bayer raw workflow where you're acquiring the Bayer image. By keeping it raw you're dealing with a much smaller file than you would if you were shooting in a conventional way; if you had a camera with three 4K sensors, each one creating a full 4K color-channel image, you'd have a massive, massive file to deal with. So the idea with a Bayer sensor is that if you do not debayer the material inside the camera and you store it as-is, you're dealing with a nice, small file, basically one third of the size the image would be from a conventional camera. So you store that, and then you take it into a post-production process. No matter whose post-production process, by the way, it doesn't matter: at some point you need to demosaic that footage.
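A quick way to see where the one-third figure comes from, and what the demosaic step actually does, is a toy sketch like the following (Python with NumPy; the RGGB pattern layout and the crude nearest-neighbor interpolation are my own illustrative assumptions, not anything specific to how the RED or the SI-2K process their footage):

```python
import numpy as np

def make_bayer(rgb):
    """Sample a full-RGB image down to a single-channel RGGB Bayer mosaic."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return bayer

def demosaic_nearest(bayer):
    """Toy nearest-neighbor demosaic: fill each 2x2 cell from its own samples."""
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=bayer.dtype)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = bayer[y, x]
            g = bayer[y, x + 1]      # use one of the cell's two green samples
            b = bayer[y + 1, x + 1]
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb

rgb = np.random.randint(0, 256, (4, 6, 3), dtype=np.uint8)
bayer = make_bayer(rgb)
print(bayer.size / rgb.size)   # 0.333... : raw stores one sample per pixel, not three
out = demosaic_nearest(bayer)
print(out.shape)               # (4, 6, 3): demosaic rebuilds the full three-channel image
```

Real debayer algorithms interpolate far more carefully than this, but the storage arithmetic is the same: one sensor sample per pixel in the raw file, three channels per pixel after the demosaic is committed.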
I work with Iridas SpeedGrade, for instance, so I can load raw directly. I've got a system with the proper display cards; in Iridas' case, the video display cards support a lot of the processing Iridas needs to do to use raw footage in real time. So I can literally do that demosaic on the fly. However, you still have to do the demosaic at some point. Even if I take the footage into SpeedGrade as raw, at some point I have to render it out as a flattened, pixel-based image. And that's an added step relative to shooting HDCAM, HDCAM SR, XDCAM, whatever it is you want to shoot. Those are all flattened images; the pixels are all there. By the way, I'm not talking about temporal compression versus I-frame compression here, I'm just talking about whether or not the uncorrelated sensor data has been committed.
So I think in a lot of cases, what's going to happen is that people with workflows that are traditionally video, and there's a large part of the market doing corporate work, work for the web, and projects with content intended for purposes other than digital cinema, will run into the extra steps introduced by this raw acquisition, whether it's the RED or the SI or whatever. I think in a lot of these cases they will find that it's a cumbersome workflow for anything other than digital cinema.
The advantage of raw for digital cinema is that you have a lot more image control, and with a narrative film or something of that nature that image control can really be key to making your film work. It certainly gives you some leeway in dynamic range. However, if you're making corporate videos and that kind of thing, doing corporate work with the RED, that's great, nothing wrong with that. But I think for most people, shooting with a camera that's got a 35mm-size image sensor is not going to be terribly easy. It's not easy to do with a small crew. It's like shooting with a 35mm film camera: you need a focus puller. You can't focus it by eye like a zoom lens on a video camera. It just won't work; the depth of field is too confining and too precise. You can't do it. That's why people don't walk around with ARRI 435s on their shoulder, hand-focusing a zoom lens. It doesn't work. You have to take a measurement and make sure that your focus is set up with the proper range in mind. That's why focus pullers never really look at viewfinders; they're dealing with measurements and timing, and they know that when the dolly hits here, I have to be at this number, and when the dolly moves to there, by the time they land on that mark I have to be at this other number. And that's how they do it.
So I'll be very interested to see what RED's 3K cameras do; those are also going to be extremely affordable. Click here to listen to the rest of Tim Kolb's interview.
Edited by James Smith, 08 June 2009 - 02:16 AM.