Tuesday, January 31, 2012

FIX University Welcome Friends & Family MCI Campus on the Moon's Surface

The Descendants

Alexander Payne is a director who makes human stories with a blend of comedy and drama that we can all relate to. Movies like Sideways and About Schmidt are about self-discovery, and audiences relate to the humor of awkward situations that echo familiar events in our own lives.

The Descendants continues this trend with Matt King (George Clooney), a Hawaiian lawyer and disengaged father who struggles with the realization that he will have to remove his wife from life support. She is in a coma as the result of a terrible boating accident. Payne has interwoven an additional storyline dealing with King’s extended family and their plans to sell a huge piece of unspoiled land to a large developer. The King family are descendants, distant relatives of native Hawaiians and non-native immigrants who settled the islands generations ago. The film’s title stems from this part of the story.

The Descendants was shot in about fifty days on location in Hawaii with the cutting being done in Hollywood (during principal photography) and later in Santa Monica (during post production). The production was on 35mm film, with Fotokem handling dailies and a digital intermediate finish at Modern VideoFilm. Alexander Payne is one of the few directors who has the right of final cut on his films and Kevin Tent (The Golden Compass, Sideways, About Schmidt, Blow) has been the editor on all of Payne’s films. Since the dailies that the production crew would view had to travel back to Hawaii and Hawaii is three hours behind Los Angeles, editorial got to see dailies before production. Payne would give Tent a daily call to get the rundown on how everything looked and sounded.

Tent described the post production schedule, “We’d sometimes get only a half day’s worth of dailies and other times a day-and-a-half. This would depend on the crew’s cut-off time to get the negative on a plane and then to the lab. I would assemble scenes and send them back to Alexander to watch over the weekends. When he got back to Los Angeles in June, we started working away. We had our first cut for the studio in late September. The first official audience preview was in late October and we finished the film by late February 2011.”

An Amtrak cut

Kevin Tent and first assistant editor Mindy Elliott cut on Avid Media Composers (version 4.0.5) connected to Unity shared storage. Dailies were delivered by Fotokem on HDCAM-SR tape for Fox Searchlight, as well as Avid DNxHD36 media on FireWire drives to be ingested into the Unity system. The FireWire drives came in handy later, because Clooney invited Payne and Tent to go to his villa in Italy for a couple of weeks. The two were able to continue cutting using a laptop and the FireWire drives both in Italy and subsequently on a cross-country train ride from New York back to Los Angeles.

“We’re laying claim to being the first film cut on an Amtrak train,” Tent joked. “Alexander had this great idea to take the train across the US on the return home. It was an old-style Hollywood romantic notion, where a writer would board the train in New York and when they arrived in Los Angeles, the movie script was done. Our two families had booked rooms in sleeper cars, which were large enough to spread out the laptop and the drives. This let me get some editing in during the two days on the train, but it’s awfully hard to concentrate on editing when you are going through some of the most gorgeous countryside in the US!”

Striking the right tone

Tent discussed some of his thoughts behind the editing of the film. “We tried to keep up the pace throughout the whole movie, but it’s the type of film in which you can’t shift the tone too quickly or you’ll lose the audience. Our feeling was that if you rushed it, the audience wouldn’t have time to absorb and feel the emotion. The balance between the drama and the humor was probably our biggest challenge. We’ve had similar challenges on Alexander’s other films, but The Descendants was a whole new level of trickiness. We had to be respectful of the characters and what they were experiencing. It’s about raw human emotion and about death – something most audience members can relate to in one way or another. So we scaled back and trimmed some of the humor, being very careful of anything that might feel insensitive to our characters. Hopefully we struck a good balance and the humor feels like it could happen in real life.”

King’s wife is only seen outside of the coma and hospital bed in one shot at the beginning of the film. I asked Tent whether changes were made in the edit to shorten the scene. Tent responded, “No, it was never part of a longer or larger scene. Just that shot of Patricia Hastie [Elizabeth King] on the boat. Its purpose was to catch just a glimpse of a person’s life. Life is so fragile. She’s alive and vivacious one moment and the next… Patty did a pretty amazing job. Many people thought we used a mannequin in the later scenes, but it’s all Patty. She lost all that weight and never broke her character even when other actors were yelling at her.”

“The voice-over at the beginning was always scripted and we recorded much more than we used. In early screenings, our audiences were having a little trouble getting insight into George’s character, Matt. Alexander wrote a couple of new lines, which substantially changed the beginning of the film and the audiences’ understanding of Matt and his wife. The lines we added were, ‘Wake up Elizabeth… Wake up… I’m ready to be a husband and a father… I’m ready to talk.’ These simple lines were enormously effective. Our audiences now immediately understood the back-story, their troubled marriage, his disengaged parenting and probably most important, his desperation. It was interesting that such a simple change in a couple of lines could have such a big impact.”

“Initially there were more scenes in the hospital in which the Matt King character told us about being a lawyer and the land deal – all in the first ten or fifteen minutes of the movie. We cut them out and wound up waiting until after Matt and Scottie visit her little friend to apologize. The first time the audience hears anything about the land deal is from the mother of the little girl out on the front porch. We let her make the introduction and then we followed with a montage of dissolves of him working, looking at photos and the voice-over. This was a very organic way to firmly set up the new story line.”

The Descendants offers a real sense of Hawaiian authenticity. Instead of a film score produced by a single composer, Payne opted for a series of songs and tracks recorded by Hawaiian musicians. Dondi Bastone (music supervisor) and Richard Ford (music editor) combed through tons of local Hawaiian tracks to come up with the right feel. Many of the scenes play well with little or no music at all – just simple slack key guitar tracks to augment or accent a scene or transition between scenes.

Avid script-based editing

Kevin Tent has been cutting on Avid Media Composer systems since his transition from film editing. Tent said, “When cutting on film, you really had to think about the ramifications of the changes you were going to make. Cutting on film was a lot like playing chess. You’d have to have the whole board in mind before you’d make your move. But, I’d never go back. I love the Avid. It’s a brilliant piece of machinery. This is the first time I’ve used ScriptSync. It was fantastic and Alexander loved it, too. We’re constantly reviewing for performance and looking at our back-up takes. ScriptSync made this process so much easier.”

Mindy Elliott explained how ScriptSync was used on the film. “Each scene had a folder within the Avid project. Inside were the dailies bin and a script for just that scene. Preliminary scripting of the dailies was the main task of our apprentice editor, Mikki Levi. We didn’t really use the automatic features. Almost everything was done manually, which was determined by Alexander’s directing style. There are many ‘resets’ and ‘line repeats’ within a take, so we devised ways of marking that in the script. We also manually entered and scripted the voice-over, live musical performance and a lot of non-verbal action.”

Effects and the DI

Elliott also described their process for the DI finish and the handful of visual effects in the film. “We did a temp mix and color-correction pass (the picture was assembled off of tape using EDLs) for our two HD tape preview screenings. Our production assistant, Brian Bautista, is a visual effects whiz. Using his After Effects and Photoshop skills, Brian did the preliminary work on the Hawaii maps (used when Matt and Scottie travel to The Big Island to pick up Alexandra – and when the whole clan goes to Kauai), green screen shots (plane and car windows when Matt and Scottie travel to The Big Island) and a time warp to extend the tail of a shot (when Matt disappears behind a hedge after spotting his wife’s lover). We inserted QuickTime versions of the temp effects for preview screenings and provided the templates for the finished work done by Nate Carlson (credit sequences and maps), Custom Film Effects (green screen shots, Banyan tree CGI) and Modern VideoFilm (split screen comps, time warp). Delivery for the DI at Modern VideoFilm was very much like delivering to a negative cutter, including a reference QuickTime for each reel, plus Pull Lists and Optical Pull Lists. We received ‘confidence’ check reels of the DI back from Modern that we loaded into the Avid to gang against our locked cut to make sure it all matched.”
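Because conform instructions like these still ride on EDLs, it is worth seeing how little data an EDL event actually carries. Below is a minimal sketch – not the film's actual tooling – of pulling event numbers, reels and source timecodes from CMX3600-style events, the raw material for a simple pull list.

```python
# Minimal sketch: extracting pull-list data from CMX3600-style EDL events.
# Illustrative only -- real EDL flavors vary, and this ignores comments,
# transitions other than cuts, and drop-frame notation.
import re

EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\w+)\s+"                          # event, reel, track, cut type
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"   # source in / source out
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"      # record in / record out
)

def pull_list(edl_text):
    """Yield (event, reel, src_in, src_out) for each matching event line."""
    for line in edl_text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            yield m.group(1), m.group(2), m.group(5), m.group(6)

sample = "001  TAPE001  V  C  01:00:10:00 01:00:15:12 00:59:58:00 01:00:03:12"
print(list(pull_list(sample)))
```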

Asked for some parting editing wisdom, Kevin Tent offered this humorous anecdote about his Amtrak experience. “My big take-away was that you can edit on the train and you can drink on the train, but you can’t drink and edit on the train. Nope… not so easy. I learned that one night after dinner and a bottle of wine in the dining car. We decided to go back to work afterwards. Trying to click on a tiny laptop with the combination of wine and the constant movement of the train – it was just too damn hard [laugh].”

Written for DV Magazine (NewBay Media, LLC)

©2012 Oliver Peters

DaVinci Resolve 8

Blackmagic Design’s acquisition of DaVinci has transformed this Ferrari of high-end color grading into the preferred desktop tool, without diminishing its stature. Resolve 8.1 comes in three Mac flavors – Resolve Lite (free), Resolve (paid, software-only) and the DaVinci Resolve Control Surface (software included). Blackmagic Design announced that a Windows version is in development for early 2012. All of these products use the same base software tools and features, except that the Lite version is restricted to SD and HD sizes and doesn’t include the stereo 3D support or noise reduction of the paid versions.

The release of the 8.1.1 patch removed the limitation on the number of correction nodes possible in the Lite version. In short, the grading power is now the same between the free and the paid versions. Another welcome change is that all versions now read AAF files and Avid MXF media, which had previously been a paid option.

Hardware configurations

I tested DaVinci Resolve 8.1 and 8.1.1 on my eight-core Mac Pro under Mac OS 10.6.8 and “Lion” 10.7.2. A change that came with 8.1 was a relaxation of the minimum monitor resolution requirement, from the original spec of 1920 x 1080 down to 1680 x 1050 pixels (or higher). That works for my 20-inch Apple Cinema Displays and it also allows you to use Resolve on some of the MacBook Pro models.

One of the hallmarks of the Resolve software is that it can leverage the power of additional GPU cards. You can install one or more NVIDIA CUDA-enabled cards for accelerated performance and rendering, but this isn’t required for Resolve to work. When a second GPU card is present, Resolve offloads some of the image processing chores to the other card. Installation of additional GPU cards into a Mac Pro poses some issues. There are only four PCIe slots in the machine and only slot one permits a double-wide card. Mac Pros currently ship with either an ATI 5770 or an ATI 5870 display card. The only approved (and currently available) NVIDIA CUDA cards for the Mac are the Quadro 4000 and the Quadro FX 4800. Not all Mac software – notably Apple Color – is compatible with multiple GPU cards installed in the tower.

DaVinci recommends several GPU configurations, but of these options, the most cost-effective combo is the ATI 5770 with the Quadro 4000. Both are single-wide cards, so this leaves you room for two more PCIe cards, such as a Red Rocket and a storage adapter. A Mac Pro can only run one card that requires auxiliary power, so you cannot use the upgraded 5870 together with the 4000, as each requires aux power connections to the motherboard. I tested Resolve in three GPU configurations: the ATI 5870 (my standard card) and the Quadro 4000 each by themselves, as well as the 4000 combined with my original NVIDIA GeForce GT120 display card. Resolve Lite only allows one extra GPU card, but the paid Mac and Linux versions let you run more. Since this poses slot limitations on the Mac Pro, DaVinci recommends the Cubix PCIe expansion chassis if you need to build a more powerful system.

Video I/O and control surfaces

DaVinci Resolve will only operate with certain Blackmagic Design capture cards. I installed the Decklink HD Extreme 3D card. At $995 it offers a wide range of HDMI, analog and digital connections and supports SD, HD, 2K and stereo 3D operation. It has built-in 3Gb/s SDI, 4:2:2 and 4:4:4 RGB.

The Decklink HD Extreme 3D card looks like a double-wide card, but actually half of the width is a bridge adapter for the HDMI connectors. If you don’t need HDMI, the adapter can be left off, so you only have one slot to worry about. Even if you need HDMI, but are tight on slots, you could still connect the HDMI cable to the back of the card and find a way to snake the cable out through some other opening in the Mac Pro chassis. Not ideal, but definitely functional.

Having a Decklink card is important for proper external monitoring, but the Resolve interface does include a full screen viewer. If the grading you do is only for the web, it could be viable to run Resolve without any video I/O card at all. On the other hand, if stereo 3D projects are in your future, then this is the card you’ll want.

DaVinci Resolve supports three third-party control surfaces in addition to Blackmagic Design’s own powerful, but expensive DaVinci Resolve Control Surface ($29,995). That’s an advanced three-panel unit designed with full-time, professional colorists in mind. If that’s a bit too rich for your blood, then you can choose from the Avid Artist Color, Tangent Devices Wave or the JL Cooper Eclipse CX panels. Control surfaces are nice, but Resolve is perfectly functional with only a mouse and keyboard.

Getting started

Installation went smoothly under both 10.6.8 and 10.7.2. The paid version uses a USB license key (dongle), which can be moved among several machines if you need to run Resolve on more than one Mac (not simultaneously). The Resolve software (including an SQL database used to store projects) and various Blackmagic Design utilities, codecs and FCP 7 Easy Set-ups are installed. You’ll need to rebuild or restore your FCP preferences afterwards. The Decklink installation will do the same, in addition to running a firmware update for the card itself. One gotcha with the Quadro 4000 card is that you have to install two pieces of software: the retail driver for the card along with a separate CUDA enabler.

Resolve is a very deep application designed for professional colorists. If you launch it without even browsing the manual, you are going to be clueless. The tabbed interface opens up many layers to the software, which are too complex to spell out here. Unlike other color correction tools and plug-ins, Resolve is designed to be a self-contained finishing environment, complete with VTR or file-based ingest and output capabilities using the Decklink hardware.

Resolve offers input and output LUTs for various formats, notably the ARRI ALEXA Log-C profile, as well as updated RED camera raw settings. Aside from color correction, other tools include conforming media to EDL, XML and AAF files, resizing, tracking and more. Fortunately the manual includes a Quick Start section. Assuming that you have a basic understanding of color correction software, you can be up and running and get your first project out the door on day one.

Performance

Resolve’s color correction model uses a node structure. The first node is your primary correction and then subsequent serial nodes are for secondary color correction. You can also introduce parallel nodes. For example, maybe you’d like to derive a mask from an earlier stage of correction, but introduce it later into the signal path. That mask is a parallel node.
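As a rough mental model – this is not Resolve's API, just a toy illustration – serial nodes behave like function composition, while a parallel node taps the signal path at an earlier stage and mixes its result back in later.

```python
# Toy model of serial vs. parallel correction nodes. Each "node" is a
# function over pixel values; mix() stands in for a masked combine.
def primary(img):      return [min(p * 1.1, 1.0) for p in img]              # overall gain (node 1)
def cool_shadows(img): return [p * 0.95 if p < 0.3 else p for p in img]     # secondary (node 2)
def mix(a, b):         return [(x + y) / 2 for x, y in zip(a, b)]           # masked-mix stand-in

source = [0.1, 0.5, 0.9]
serial = cool_shadows(primary(source))   # node 1 -> node 2, chained corrections
parallel = primary(source)               # parallel node fed from an earlier stage
output = mix(serial, parallel)           # merged back later in the signal path
print(output)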

I ran through a series of 23.98 and 25fps HD projects. Each used different ProRes or DNxHD codecs. With the ATI 5870 card and a standard timeline (one or two nodes on each clip), I was consistently able to achieve real-time playback. In order to test the value of a second GPU card, I built up a taxing 1:15-long timeline with six nodes on each clip (one primary plus five secondaries). Some of these nodes included blur or sharpening. Both the 5870 and 4000 cards by themselves played the sequence back at 15-16fps.

Changing to a two-card configuration (GT120 plus the Quadro 4000) only picked up about 2fps – playing at approximately 18fps. In all three configurations, this timeline took under two minutes to render. This performance is on par with Apple Color for simple projects and faster when you get more complex. Since NVIDIA CUDA cards offer more benefit on PCs than Macs, I would expect Resolve running under Windows next year to significantly outperform these tests.

One caveat is that the noise reduction module in the paid version of Resolve only works with a CUDA-enabled GPU card, such as the Quadro 4000. Unfortunately, the quality wasn’t better than similar filters, like Neat Video or Magic Bullet Cosmo, so I wouldn’t let that be a deciding factor. For me, having a second GPU doesn’t justify the purchase of an extra card, especially when you consider the trade-offs. Unless you plan to build a dedicated color grading suite around Resolve instead of a general purpose editing workstation, you’re probably better off with one high-end ATI or NVIDIA card.

Impressions

New features added to Resolve include multi-layered timelines, FCP XML support for FCP X and a color wheels panel. In Resolve 7, timelines were a flattened, single-layer video track. If your FCP or Avid sequence consists of several video tracks, those will now show up as corresponding layers in the Resolve timeline. The new color wheels tab makes the interface more consistent with conventional color correction interface design.

A huge selling point for Resolve is the depth of roundtrip support. I was able to import and grade various edited sequences from Final Cut Pro 7, X and Avid Symphony 6. Since Resolve reads and writes the two formats of XML used by FCP X and FCP “classic”, you can actually send simple sequences from one NLE to the other. Edit in FCP X, grade in Resolve, render and send to FCP 7. Or the other way around. The enhanced Avid support makes Resolve an excellent tool to augment Media Composer’s built-in color correction mode. Not to mention that both applications can now also run on the same Blackmagic Design hardware.

I found Resolve relatively easy to learn and use. The controls and interaction are very responsive, with little delay between moving a slider, curve or wheel and seeing the update in the viewer, on the screen and/or on the scopes. The interface is optimized only for a single display, which I don’t prefer. You can send the scopes to the secondary display, but that’s it. This leaves you with a complex UI on the main screen. Your principal color correction tools appear in smaller, tabbed sections at the bottom. If you build a dedicated Resolve suite, consider one of the Apple 27-inch displays for your system. Although Resolve is resolution independent, the Mac configurations are really designed for HD and possibly 2K projects. I think it’s unrealistic to try to push complex, native 4K projects through such a workstation.

Truthfully, I prefer the interface design of Apple Color, but DaVinci Resolve’s toolset is a much better fit for the post production landscape of 2012. Editors who aren’t ready to invest the time to learn the application are better served by the FCP X Color Board or plug-ins like Magic Bullet Colorista II or Sapphire Edge. If you do spend a little time learning Resolve, you’ll end up with one of the best values available in post today. Even the free Resolve Lite places the controls of a multi-hundred-thousand-dollar DI system at your fingertips.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

The Girl with the Dragon Tattoo

The director who brought us Se7en has tapped into the dark side again with the Christmas-time release of The Girl with the Dragon Tattoo. Hot off of the success of The Social Network, director David Fincher dove straight into this cinematic adaptation of Swedish writer Stieg Larsson’s worldwide publishing phenomenon. Even though a Swedish film from the book had been released in 2009, Fincher took on the project, bringing his own special touch.

The Girl with the Dragon Tattoo is part of Larsson’s Millennium trilogy. The plot revolves around the disappearance of Harriet Vanger, a member of one of Sweden’s wealthiest families, forty years earlier. After all these years, her uncle hires Mikael Blomkvist (Daniel Craig), a disgraced financial reporter, to investigate the disappearance. Blomkvist teams with punk computer hacker Lisbeth Salander (Rooney Mara). Together they start to unravel the truth that links Harriet’s disappearance to a string of grotesque murders committed forty years before.

For this production, Fincher once again assembled the production and post team that proved successful on The Social Network, including director of photography Jeff Cronenweth, editors Kirk Baxter and Angus Wall and the music scoring team of Trent Reznor and Atticus Ross. Production started in August of last year and proceeded for 167 shooting days on location and in studios in Sweden and Los Angeles.

Like the previous film, The Girl with the Dragon Tattoo was shot completely with RED cameras – about three-quarters using the RED One with the M-X sensor and the remaining quarter with the RED EPIC, which was finally being released around that time. Since the EPIC cameras were in their very early stages, the decision was made to not use them on location in Sweden, because of the extreme cold. After the first phase in Sweden, the crew moved to soundstages in Los Angeles and continued with the RED Ones. The production started using the EPIC cameras during their second phase of photography in Sweden and during reshoots back in Los Angeles.

The editing team

I recently spoke with Kirk Baxter and Angus Wall, who as a team have cut Fincher’s last three films, earning them a best editing Oscar for The Social Network as well as a nomination for The Curious Case of Benjamin Button. I was curious about tackling a film that had already been done a couple of years before. Kirk Baxter replied, “We were really reacting to David’s material above all, so the fact that there was another film about the same book didn’t really affect me. I hadn’t seen the film before and I purposefully waited until we were about halfway through the fine cut, before I sat down and watched the film. Then it was interesting to see how they had approached certain story elements, but only as a curiosity.”

As in the past, both Wall and Baxter split up editorial duties based on the workload at any given time. Baxter started cutting at the beginning of production, with Wall joining the project in April of this year. Baxter explained, “I was cutting during the production to keep up with camera, but sometimes priorities would shift. For example, if an actor had to leave the country or a set needed to be struck, David would need to see a cut quickly to be sure that he had the coverage he needed. So in these cases, we’d jump on those scenes to make sure he knew they were OK.” Wall continued, “This was a very labor intensive film. David shot 95% to 98% of everything with two cameras. On The Social Network they recorded 324 hours of footage and selected 281 hours for the edit. On Dragon Tattoo that count went up to 483 hours recorded and 443 hours selected!”

The Girl with the Dragon Tattoo has many invisible effects. According to Wall, “At last count there were over 1,000 visual effects shots throughout the film. Most of these are shot stabilizations or visual enhancements, such as adding matte painting elements, lens flares or re-creating split screens from the offline. Snow and other seasonal elements were added to a number of shots, helping the overall tone, as well as reinforcing the chronology of the film. I think viewers will be hard pressed to tell which shots are real and which are enhanced.” Baxter added, “In a lot of cases the exterior locations were shot in Sweden and elaborate sets were built on sound stages in LA for the interiors. There’s one sequence that takes place in a cabin. All of the exteriors seen through the windows and doors are green screen shots. And those were bright green! I’ve been seeing the composited shots come back and it’s amazing how perfect they are. The door is opened and there’s a bright exterior there now.”

A winning workflow solution

The key to efficient post on a RED project is the workflow. Assistant editor Tyler Nelson explained the process to me. “We used essentially the same procedures as for The Social Network. Of course, we learned things on that, which we refined for this film. Since they used both the RED M-X and the EPIC cameras, there were two different frame sizes to deal with – 4352 x 2176 for the RED One and 5120 x 2560 for the EPIC. Plus each of these cameras uses a different color science to process the data from the sensor. The file handling was done through Datalab, a company that Angus owns. A custom piece of software called Wrangler automates the handling of the RED files. It takes care of copying, verifying and archiving the .r3d files to LTO and transcoding the media for the editors, as well as for review on the secured PIX system. The larger RED files were scaled down to 1920 x 1080 ProRes LT with a center-cut extraction for the editors, as well as 720p H.264 for PIX. The ‘look’ was established on set, so none of the RED color metadata was changed during this process.”
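Wrangler itself is proprietary, but the copy-and-verify step it automates is easy to picture. Here is a minimal sketch, assuming SHA-256 checksums (the actual hash and folder layout used on the show aren't documented here):

```python
# Hypothetical sketch of a checksum-verified dailies copy -- the kind of
# step a tool like Wrangler automates before any camera media is cleared.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def copy_verified(src: Path, dst_dir: Path) -> Path:
    """Copy one .r3d file and prove the copy matches the original."""
    dst = dst_dir / src.name
    shutil.copy2(src, dst)
    if sha256(src) != sha256(dst):
        raise IOError(f"Checksum mismatch on {src.name}; do not clear the source")
    return dst

# usage (hypothetical paths): copy_verified(Path("A001_C001.r3d"), Path("/raid/dailies"))
```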

“When the cut was locked, I used an EDL [edit decision list] and my own database to conform the .r3d files back into reels of conformed DPX image sequences. This part was done in After Effects, which also allowed me to reposition and stabilize shots as needed. Most of the repositioning was generally a north-south adjustment to move a shot up or down for better head room. The final output frame size was 3600 x 1500 pixels. Since I was using After Effects, I could make any last minute fixes if needed. For instance, I saw one shot that had a monitor reflection within the shot. It was easy to quickly paint that out in After Effects. The RED files were set to the RedColor2 / RedLogFilm color space and gamma settings. Then I rendered out extracted DPX image sequences of the edited reels to be sent to Light Iron Digital, who again handled the DI on this film.”
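The center-cut extraction mentioned above is simple arithmetic: crop the wider RED frame to the editorial aspect around the center, then scale. A sketch (the exact crop choices used on the film aren't specified here):

```python
# Compute a centered 16:9 crop region from a wider source frame.
def center_cut(src_w, src_h, target_aspect=16 / 9):
    crop_w, crop_h = src_w, round(src_w / target_aspect)
    if crop_h > src_h:                        # source wider than target: crop width instead
        crop_w, crop_h = round(src_h * target_aspect), src_h
    return ((src_w - crop_w) // 2, (src_h - crop_h) // 2, crop_w, crop_h)

print(center_cut(4352, 2176))   # RED One M-X -> (242, 0, 3868, 2176), then scale to 1920x1080
print(center_cut(5120, 2560))   # EPIC        -> (284, 0, 4551, 2560)
```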

On the musical trail

The Girl with the Dragon Tattoo leans heavily on a score by Trent Reznor and Atticus Ross. An early peek came from a teaser cut for the film by Kirk Baxter to a driving Reznor cover of Led Zeppelin’s “Immigrant Song”. Unlike the typical editor and composer interaction – where library temp tracks are used for the edit and then a new score is done at the end of the line – Reznor and Ross were feeding tracks to the editors during the edit.

Baxter explained, “At first Trent and Atticus score to the story rather than to specific scenes. The main difference with their approach to scoring a picture is that they first provide us with a library of original score, removing the need for needledrops. It’s then a collaborative process of finding homes for the tracks. Ren Klyce [sound designer/re-recording mixer] also plays an integral part in this.” Wall added, “David initially reviewed the tracks and made suggestions as to which scenes they might work best in. We started with these suggestions and refined placement as the edit evolved. The huge benefit of working this way was that we had a very refined temp score very early in the process.” Baxter concluded, “Then Trent’s and Atticus’s second phase is scoring to picture. They re-sculpt their existing tracks to perfectly fit picture and the needs of the movie. Trent’s got a great work ethic. He’s very precise and a real perfectionist.”

The cutting experience

I definitely enjoyed the Oscar-winning treatment these two editors applied to intercutting dialogue scenes in The Social Network, but Baxter was quick to interject, “I’d have to say Dragon Tattoo was more complicated than The Social Network. It was a more complex narrative, so there were more opportunities to play with scene order. In the first act you are following the two main characters on separate paths. We played with how their scenes were intercut so that their stories were as interconnected as possible, giving promise to the audience of their inevitable union.”

“The first assembly was about three hours long. That hovered at around 2:50 for a while and got a bit longer as additional material was shot, but then shorter again as we trimmed. Eventually some scenes were lost to bring the locked cut in at two-and-a-half hours. Even though scenes were lost, those still have to be fine cut. You don’t know what can be lost unless you finish everything out and consider the film in its full form. A lot of work was put into the back half of the film to speed it up. Most of those changes were a matter of tightening the pace by losing the lead-in and lead-outs of scenes and often losing some detail within the scenes.”

Wall expanded on this, “Fans of any popular book series want a filmed adaptation to be faithful to the original story. In this case, we’re really dealing with a ‘five act’ structure. [laughs]. Obviously, not everything in the book can make it into the movie. Some of the investigative dead ends have to be excised, but you can’t remove every red herring. So it was a challenging film to cut. Not only was it very labor intensive, with many disturbing scenes to put together, it was also a tricky storytelling exercise. But when you’re done and it’s all put together, it’s very rewarding to see. The teaser calls it the ‘feel-bad film of Christmas’ but it’s a really engaging story about these characters’ human experience. We hope audiences will find it entertaining.”

Some additional coverage from Post magazine.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where the digital scan of a film frame at 4K is considered full resolution and a 2K scan to be half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080) while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference padded with black pixels.
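Fitting a film aspect into the DCP container is a quick calculation. A sketch of the padding math (the rounding conventions here are mine, not DCI's):

```python
# Fit a picture aspect into the 4K DCP container (4096 x 2160), padding
# the leftover area with black as the container spec describes.
def fit_to_container(aspect, cont_w=4096, cont_h=2160):
    w, h = cont_w, round(cont_w / aspect)    # try filling the full width
    if h > cont_h:                           # too tall: fill the height instead
        w, h = round(cont_h * aspect), cont_h
    return (w, h), ((cont_w - w) // 2, (cont_h - h) // 2)

print(fit_to_container(2.40))   # 'Scope' -> ((4096, 1707), (0, 226)) letterboxed
print(fit_to_container(1.85))   # 'Flat'  -> ((3996, 2160), (50, 0)) pillarboxed
```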

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.
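The pitch arithmetic behind that point is trivial. Using approximate sensor widths and pixel counts from public spec sheets (treat the numbers as ballpark figures, not gospel):

```python
# Rough photosite-pitch comparison: similar sensor widths, very different
# pixel densities. Dimensions are approximate, for illustration only.
def pitch_microns(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm / horizontal_pixels * 1000

print(round(pitch_microns(27.7, 5120), 2))   # ~27.7 mm-wide 5K sensor  -> ~5.4 um pitch
print(round(pitch_microns(22.3, 5184), 2))   # ~22.3 mm-wide 18MP APS-C -> ~4.3 um pitch
```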

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of focal length, aperture, subject distance and the acceptable circle of confusion, which scales with sensor size. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. But because larger sensors require longer focal lengths for an equivalent field of view compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, and that makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as they relate to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.
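To see how the circle of confusion – and therefore sensor size – enters the math, here is a back-of-envelope depth-of-field sketch using the standard thin-lens approximations. The circle-of-confusion values are conventional ballpark figures, not official specs.

```python
# Depth of field from the standard approximations: hyperfocal distance,
# then near/far limits of acceptable focus. All distances in millimeters.
def hyperfocal(f, N, coc):
    return f + f * f / (N * coc)

def dof(f, N, subject, coc):
    H = hyperfocal(f, N, coc)
    near = H * subject / (H + (subject - f))
    far = H * subject / (H - (subject - f)) if subject < H else float("inf")
    return near, far

# Matched framing at 3 m and f/2.8: Super 35 (CoC ~0.025 mm) with a 50 mm lens
# vs. a 2/3-inch sensor (CoC ~0.011 mm), which needs only ~19 mm for the same view.
print(dof(50, 2.8, 3000, 0.025))   # ~2.77 m to ~3.27 m -- about 0.5 m deep
print(dof(19, 2.8, 3000, 0.011))   # ~2.39 m to ~4.02 m -- roughly 3x deeper
```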

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35mm-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35mm motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS-1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16mm film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35mm-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.
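The difference between line-skipping and a proper downsample is easy to show in miniature. A toy sketch, assuming numpy (a real debayer and downsample pipeline is far more involved):

```python
# Line-skipping discards sensor rows/columns (inviting aliasing), while a
# box downsample averages every photosite into the output frame.
import numpy as np

sensor = np.arange(16, dtype=float).reshape(4, 4)     # stand-in for a full sensor readout

skipped = sensor[::2, ::2]                            # keep every other row/column
boxed = sensor.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # average each 2x2 block

print(skipped)   # built from a quarter of the photosites
print(boxed)     # built from all of them
```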

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.
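The quad-link approach is just a tiling of the frame. A sketch of the geometry (the actual quadrant-to-connector mapping is equipment-specific):

```python
# Split a 4K frame into four quadrants, one per SDI link.
def quadrants(w=4096, h=2160):
    hw, hh = w // 2, h // 2
    return {
        "link 1 (top left)":     (0,  0,  hw, hh),
        "link 2 (top right)":    (hw, 0,  hw, hh),
        "link 3 (bottom left)":  (0,  hh, hw, hh),
        "link 4 (bottom right)": (hw, hh, hw, hh),
    }

for link, (x, y, qw, qh) in quadrants().items():
    print(f"{link}: origin ({x}, {y}), {qw} x {qh}")   # each quadrant is 2K (2048 x 1080)
```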

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Hugo

The newest stereo 3D film sensation promises to be Martin Scorsese’s Hugo, just in time for the holidays. The film is the director’s first 3D venture and is based on The Invention of Hugo Cabret, a children’s graphic novel written and illustrated by Brian Selznick. It’s the story of twelve-year-old Hugo, an orphan who lives in the walls of a busy Paris train station. Hugo gets wrapped up in the mystery involving his father and a strange mechanical man.

Scorsese – who’s as much a film buff as an award-winning director – has a deep appreciation for the art form of past 3D films, like Dial M for Murder. In adapting this fantastical story, Scorsese and his Oscar-winning team have an ideal vehicle to show what stereo 3D can do in the right hands and when approached with care. Unlike the groundbreaking Avatar, which relied heavily on motion capture and synthetic environments, Hugo is a more cinematic production with real sets and actors, grounded in the traditional language of filmmaking.

Hugo started production in 2010 using then-prototype ARRI ALEXA cameras, which were configured into special 3D camera rigs by Vince Pace. The ALEXA was the choice of cinematographer Bob Richardson for its filmic qualities. Camera signals were captured as 1920 x 1080 video with the Log-C color profile to portable HDCAM-SR recorders. Hugo will be the first 3D release produced with this particular equipment complement. With post for Hugo in its final stages, I had a chance to speak with two of Scorsese’s key collaborators, Rob Legato (visual effects supervisor and second unit director of photography) and Thelma Schoonmaker (film editor).

Developing the pipeline

Rob Legato has been the key to visual effects in many of Scorsese’s films, including The Aviator. For Hugo, Legato handled effects, second unit cinematography and, in fact, developed the entire start-to-finish stereo 3D post pipeline. Legato started our conversation with the back story, “I had done a small film with the ARRI D-21 and Bob [Richardson] loved the look of the camera. He liked the fact that it was produced by a traditional film camera manufacturer, so when the ALEXA came out, he was very interested in shooting Hugo with it. In order to make sure that the best possible image quality was maintained, I developed a DI workflow based on maintaining all the intermediate steps up to the end in log space. All effects work stayed in log and dailies color correction was done in log, so that no looks were baked in until the final DI stage. We used LUTs [color look-up tables] loaded into [Blackmagic Design] HDLink boxes for monitoring on-set and downstream of any of the visual effects.”
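The reason for staying in log until the DI is that Log-C packs the sensor's full dynamic range into a flat-looking image that monitoring LUTs can preview without baking anything in. As a concrete reference point, here is a decode from ALEXA Log-C (EI 800) to scene-linear using the commonly published v3 parameters; verify the constants against ARRI's own documentation before relying on them.

```python
# ALEXA Log-C (EI 800) to scene-linear, per the widely published LogC v3
# parameters. Code value ~0.391 should decode to ~0.18 (18% gray).
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc_to_linear(t: float) -> float:
    if t > E * CUT + F:                      # logarithmic segment
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E                       # linear toe segment

for code in (0.092809, 0.391, 1.0):          # sensor black, mid gray, clip
    print(code, round(logc_to_linear(code), 4))
```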

“The original dailies were color corrected for editorial on a Baselight unit and that information was saved as metadata. We had both an Avid Media Composer and a Baselight system set up at my home facility, The Basement. Thelma cuts on Lightworks, but by mirroring her edits on Media Composer, I had the information in a form ready to disperse to the visual effects designers. I could load the color grades developed by Marty, Bob and the colorist for each scene into my Baselight, so that when I turned over finished VFX shots to Thelma, they would have the same look applied as those shots had from the dailies. That way a VFX shot wouldn’t be jarring when Thelma cut it back into the sequence, because it would match the same grade.”

Working in the language of stereo 3D

The key to the look of Hugo is the care put into the stereo 3D images. In fact, it’s very much a hand crafted film. Legato continued, “All the 3D imagery was done in-camera. You could never accomplish this type of look and emotional feel with post production rotoscoping techniques used to turn 2D films into 3D. Stereo was designed into the film from the very beginning. Not 3D gags, but rather a complete immersive style to the sets, lighting, camera moves and so on. Marty and Bob would watch the shots on set in 3D wearing their glasses. Performances, lighting, stereography and the position of items in the set were all tweaked to get the best results in 3D. The sets were designed for real depth, including elements like steam and particles in the air. You feel what it’s like to be in that space – emotionally. In the end, the story and the look are both a real love affair with motion pictures.”

One of the common complaints stereo 3D critics offer is that cinematographers cannot use shallow depth-of-field for storytelling. Legato responded, “Marty and Bob’s approach was to create those depth cues through lighting. We erred on the side of more focus, not less – more in the style of Gregg Toland’s work on Citizen Kane. Monitoring in stereo encouraged certain adjustments, like lighting little parts of the set in the background to gain a better sense of depth and control where the audience should focus its attention. That’s why stereoscopic post on 2D films doesn’t work. You cannot put off any part of the art form until later. You lose the taste of the artists. You lose the emotional advantage and the subtlety, because the process hasn’t been vetted by decisions made on the set through staging.”

A tailored approach

At the time of this interview, the film was in the final stages of stereo adjustments and color grading. Legato explained, “Unlike a 2D film, the finishing stage includes a final pass to tweak the 3D alignment. That is being handled by Vince Pace’s folks with Marty and Bob supervising. When they are done, that information will go to the colorist to be integrated into the grade. Greg Fisher has been our colorist throughout the film. Often you don’t have the same colorist for dailies as for the DI, but this is a color workflow that works best for Bob. By establishing a look during dailies and then carrying that data to the end with the same colorist – plus using Baselight at both ends – you get great continuity to the look. We tailored the most comfortable style of working for us, including building small 3D DI theaters in England and New York, so they could be available to Marty where he worked. That part was very important in order to have proper projection at the right brightness levels to check our work. Since the basic look has already been established for the dailies, Greg can now concentrate on the aesthetics of refining the look during the DI.”

Cutting in 3D

Thelma Schoonmaker has been a close collaborator with Martin Scorsese as the editor for most of his films. She’s won Best Editing Oscars for The Departed, The Aviator and Raging Bull. Some editors feel that the way you have to cut for a stereo 3D release cramps their style, but not so with Schoonmaker. She explained, “I don’t think my style of cutting Hugo in 3D was any different than for my other films. The story really drives the pace and this is driven by the narrative and the acting, so a frenetic cutting style isn’t really called for. I didn’t have to make editorial adjustments based on 3D issues, because those decisions had already been made on set. In fact, the stereo qualities had really been designed from take to take, so the edited film had a very smooth, integrated look and feel.”

Often film editors do all their cutting in 2D and then switch to 3D for screenings. In fact, Avatar was edited on an older Avid Media Composer Adrenaline system without any built-in stereo 3D capabilities. Those features were added in later versions. Hugo didn’t follow that model. Schoonmaker continued, “I cut this film in 3D, complete with the glasses. For some basic assemblies and roughing out scenes, I’d sometimes switch the Lightworks system into the 2D mode, but when it came time to fine-cut a scene with Marty, we would both have our glasses on during the session and work in 3D. These were flip-up 3D glasses, so that when we turned to talk to each other, the lenses could be flipped up so we weren’t looking at each other through the darker shades of the polarized glass.”

Thelma Schoonmaker has been a loyal Lightworks edit system user. The company is now owned by EditShare, who was eager to modify the Lightworks NLE for stereo 3D capabilities. Schoonmaker explained, “The Lightworks team was very interested in designing a 3D workflow for us that could quickly switch between 2D and 3D. So, we were cutting in 3D from the start. They were very cooperative and came to watch how we worked in order to upgrade the software accordingly. For me, working in 3D was a very smooth process, although there were more things my two assistants had to deal with, since ingest and conforming is a lot more involved.”

Prior to working on Hugo, the seasoned film editor had no particular opinion about stereo 3D films. Schoonmaker elaborated, “Marty had a very clear concept for this film from the beginning and he’s a real lover of the old 3D films. As a film collector, he has his own personal copies of Dial M for Murder and House of Wax, which he screened for Rob [Legato], Bob [Richardson] and me with synced stereo film projection. Seeing such pristine prints, we could appreciate the beauty of these films.”

Editing challenges

The film was shot in 140 production days (as well as 60 second-unit days) and Thelma Schoonmaker was cutting in parallel to the production schedule. Principal photography wrapped in January of this year, with subsequent editing, effects, mix and finishing continuing into November. Schoonmaker shared some final thoughts, “I’m really eager to see the film in its final form like everyone else. Naturally I’ve been screening the cuts, but the mix, final stereo adjustments and color grading are just now happening, so I’m anxious to see it all come together. These finishing touches will really enhance the emotion of this film.”

Schoonmaker continued, “Hugo is a fairy tale. It is narrative-driven versus being based on characters or environments. That’s unlike some of Scorsese’s other films, like Raging Bull, Goodfellas or The Departed, where there is a lot of improvisation. Marty injected some interesting characters into the story, like Sacha Baron Cohen as the station inspector. These are more fleshed out than in the book and it was one of our challenges to weave them into the story. There are some great performances by Asa Butterfield, who plays Hugo, and Ben Kingsley. In fact, the boy is truly one of a great new breed of current child actors. The first part of the film is practically like a silent movie, because he’s in hiding, yet he’s able to convey so much with just facial emotions. As an editor, there was a challenge with the dogs. It took a lot of footage to get it right [laughs]. Hugo ends as a story that’s really about a deep love of film and that section largely stayed intact through the editing of the film. Most of the changes happened in the middle and a bit in the front of the film.”

From the imagery of the trailers, it’s clear that Hugo has received a masterful touch. If, like me, you’ve made an effort to skip the 3D versions of most of the recent popular releases, then Hugo may be just the film to change that policy! As Rob Legato pointed out, “Hugo is a very immersive story. It’s the opposite of a cutty film and is really meant to be savored. You’ll probably have to see it more than once to take in all the detail. Everyone who has seen it in screenings so far finds it to be quite magical.”

Some additional stories were featured in the Editors Guild Magazine and Post magazine, with another from Post and more from FXGuide.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters


Monday, January 30, 2012

1st Graduation Class of the Year 2013 "Pakiko Ordoñez Director"

Academic Enrollment Process:

1. Request the application form from escueladecinedigitalcali@hotmail.com, fill it out and send it back.
2. Schedule the applicant's interview with the Director.
3. The Director will send a personalized admission letter if the applicant has the profile required to enter the Escuela de Cine.
4. The admitted student must send a letter of acceptance to escueladecinedigitalcali@hotmail.com as the initial reservation of a place.
5. Schedule an interview with the Manager and the student's tutor.

Student requirements:

Must be over 17 years of age.
1. Copy of high school diploma and graduation certificate
2. Copies of university certificates or other studies, if applicable
3. Copy of identity document enlarged to 150%
4. Proof of enrollment in the health care system
5. Two ID-size (3x4) photos (for academic enrollment and the student ID card)



THIRD SCREENPLAY COMPETITION FOR SHORT FICTION FILMS


Tuesday, January 24, 2012

Anexo - Dexanexo Metropolitano on Campus? FIX

Tutela action decided in favor of the Governor's Office

Cali, January 23, 2012 – The Thirteenth Administrative Court of the Cali Circuit rejected, as inadmissible, the tutela action filed by attorney José Ignacio Arango Bernal on behalf of the Comité Pro-Rozo Municipio No. 43, which alleged violation of the rights to due process and citizen participation.

The Departmental Planning Secretary, Héctor Copete, noted that this ruling in favor of the Governor's Office of Valle del Cauca reaffirms the departmental administration's finding that creating a new municipality in Valle del Cauca – by splitting the districts of Rozo, La Acequia, La Dolores, La Herradura, La Torre, Matapalo, Obando and Palmaseca off from the Municipality of Palmira – would be economically and socially inadvisable and is not viable.

"The decision adopted by the departmental administration was not discretionary, nor is the technical study prepared by the Universidad del Valle that served as the basis for the decision incomplete, and this does not mean that the authority of the Departmental Planning Secretariat was supplanted," the official emphasized.

He added that all the requirements of Law 617 of 2000 were met, and that the Departmental Planning Secretariat based its work on the "Methodology for preparing the study on the economic and social advisability and viability of creating a new municipality" developed by the National Planning Department (DNP).

