3D on the cheap(ish)

I'll preface this post with the disclaimer that this isn't a good way to shoot stereo and I wouldn't shoot anything I was getting paid to deliver with this setup. But I love experimenting, and right now I'm looking for ways to independently shoot high quality stereo images that don't involve thousands and thousands in equipment rentals, insurance out the wazoo, and all the accompanying personnel to make it go.

For stereo acquisition, using DSLRs with the most cost effective beamsplitter we could find was certainly an avenue worth exploring, but I'll say it again - a less than ideal way to get there.

Peter Clark from Attic Studios and I did these stereo tests with Canon 5D's 3 or 4 months ago and I've been meaning to write this post ever since. Sort of like these NAB interviews that are sitting here in a Final Cut bin ;(

The company that provided the beamsplitter, 3D Film Factory, wrote this blog post a while back, publishing a truncated version of our findings. This post will attempt to go a little deeper into how we did it, what worked, and what didn't.

Here's how it all breaks down -

These are the problems with using DSLRs for stereo acquisition -

No way to genlock them, or in other words, force both sensors to begin scanning at the exact same moment in time. If you don't have genlock, you don't have 3D; temporal offsets will kill the stereoscopic illusion and quickly derail any attempt to make 3D. Also, no timecode. While that isn't necessarily a deal breaker, having it makes syncing the left and right eye images together a heck of a lot easier. But as we discovered, there are some practical workarounds.

These DSLR cameras are notorious for their rolling shutters. It's bad, but only slightly worse than the Red One, which was, and to a certain extent still is, very commonly used for 3D. I basically approached using the Mark 2s in stereo the way I would Red Ones: just make sure the sensors are both scanning in the same direction, which means mounting the reflected eye camera from the top instead of the bottom (more on that in a bit).

The goal of this test was to use somewhat readily available or cheaply rented gear, and to use software most people already have, such as Final Cut Pro and PluralEyes, without spending a bunch of money on custom plugins and codecs. Of course in the end, all of this bric-a-brac added up to a hefty sum, and for all of the headaches and workarounds needed to make it function, in my mind it's not worth it. You're better off sweet talking someone with deep pockets into financing your "dream" 3D project and just renting a couple of Sony F3's and an Element Technica Pulsar. There's a reason we have pro gear in this business. Time saved not trying to make the hoopty rig go is time spent crafting the images the client is paying for.

Here's what we used to make this happen:

IMG_4923.jpg

2x Canon 5D Mark 2s with Battery Grips, so we could avoid taking the cameras out of the rig to change batteries.

AC Adapters would have been better but we didn't have them. Once the cameras are in, don't touch 'em! Aligning the Film Factory rig is a brutal chore. Make sure to record sound because this workflow needs audio tracks on both cameras to work. 

1x 3D Film Factory Beamsplitter Rig. 

IMG_4929.jpg

Gotta make sure that mirror is at 45 degrees. That's a great place to start. 

I'm not sure which Film Factory rig we had; it might have been the Mini. The rig itself is made up of readily available 80/20 aluminum extrusion, I would guess about 5 or 600 dollars' worth, which means there's a significant margin on the rig if you were to buy one. While apparently an alignment plate does exist that will allow you to adjust pitch, height, and roll so as to match the position of one camera to the other, we didn't have it for this test. Without those adjustments, you're really up against the wall as far as your alignment goes. I was able to approximate an alignment using video overlays, but there's no way to correct for foreground-to-background vertical offsets the way you can with the Z, Pitch, and Roll adjustments on an Element Technica rig. So in other words, if you do find yourself working with the 3D Film Factory, make sure to get the "Specialty Plate".

2x Canon 50mm Lenses set to manual. 

A little wider would have been better but once again, this is what we had to work with. 

2x Pocket Wizards with Sync Shutter Cables. Very important.

Pocket_Wizard_Plus_II.jpeg

Here's where we got theoretical - as I mentioned, one of the main issues with using DSLRs for stereo is that there's no way to genlock them together. In order to create the illusion of binocular vision, the sensors on both cameras need to start scanning at exactly the same moment in time. It's like syncing cameras for live TV: all your AV devices need to be firing in phase, so that when the switcher cuts to a camera it's an instantaneous switch and not a frame or two of black or garbled signal while phase is being found. DSLRs have no ability on their own to be locked to an external signal, but there's a device called a Pocket Wizard that's used to lock the shutters of multiple cameras to a strobe light. It works great for shooting stills, so we thought that if we used the Pocket Wizard to fire a few stills while video was rolling, theoretically that would align the shutters within tolerance. After trying it a few times, we found that to be the case. It actually works quite well, and all the stereo video we shot with the Mark 2s had zero temporal sync issues.

1x Heavy tripod legs to get the rig onto. It ain't light, so think standard Ronford Bakers or heavy duty Sachtlers.

1x Consumer grade 3DTV. We had a Panasonic Viera with active shutter glasses. I was hoping to be able to monitor the 3D images in real time on this display, but there were some snafus.

Here's the rig up and running. We had to use duvetyne scraps and black gaffer to make it light tight. Looking good.

IMG_4980.jpg
IMG_4972.jpg

2x Blackmagic HDMI to SDI Mini Converters. I needed these to convert the cameras' HDMI out to SDI for use with the new AJA Hi5 3D "mini muxer". It takes 2 discrete SDI signals (i.e., left and right eye) and muxes them into a single 1920x1080 raster that can be output as side by side, interleaved, etc. for monitoring in stereo. This is a sweet little box, and it would have been perfect if the Mark 2 didn't output a funky, irregular video signal. I couldn't get the box to take the converted signals for more than a few seconds. We tested with other SDI sources, though, and it works great, so it was on to Plan B.

1x Leader LV5330 multi SDI monitor. 

This is probably the most expensive thing used on this test and is certainly not your typical "indie" tool. Because the muxer didn't like the Mark 2 signal, I had no way of monitoring stereo in realtime, but with my scope I knew I could at least align the 2 cameras in the rig to the ZERO position. In 3D you have to start from zero: both cameras must be seeing approximately the same frame, and from there you can separate them to create interocular distance, thus creating stereo images. The Leader can take 2 SDI sources and freeze frames, so what I did was set the left eye position, freeze the frame, then switch to the right eye and adjust its position until it matched the overlay of the other camera. A crude alignment but a successful one. If you can't monitor both eyes simultaneously, I don't know how else you would align other than the freeze frame method. The alignment was incredibly frustrating and involved wedging and shimming, sliding and taping - basically creating a Frankenstein-like contraption just to get a semblance of an alignment, and that was only for the foreground. Fortunately the deepest thing in our scene was only about 25 feet from the lens, so we were within the margin of error. I could tell by looking at it, though, that there were major offsets, and if we had been outside with deep backgrounds, we would have been in trouble.
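For anyone without a Leader, the freeze-frame overlay check boils down to comparing the two eyes' frames. Here's a hedged sketch of the concept (the function names and toy frames are mine, not anything the scope actually computes): blend the eyes 50/50 to eyeball offsets, or score the mismatch numerically.

```python
import numpy as np

# Hedged sketch of a poor-man's alignment check, conceptually what the
# freeze-frame overlay gave us: blend the two eyes 50/50 to eyeball
# offsets, or score the misalignment numerically.

def overlay(left_frame, right_frame):
    """50/50 blend of the two eyes for spotting vertical offsets."""
    return (left_frame.astype(float) + right_frame.astype(float)) / 2.0

def alignment_error(left_frame, right_frame):
    """Mean absolute pixel difference; 0 means the eyes match exactly."""
    diff = left_frame.astype(float) - right_frame.astype(float)
    return float(np.mean(np.abs(diff)))

# Toy frames: a vertical gradient, with the right eye bumped down two rows
frame = np.tile(np.arange(8.0)[:, None], (1, 8))
bumped = np.roll(frame, 2, axis=0)
err_aligned = alignment_error(frame, frame)
err_bumped = alignment_error(frame, bumped)
```

At ZERO, a static scene should score near 0; any vertical bump shows up immediately in the number, which is exactly what the eyeball version of this chore was chasing.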

While I was at it, I also used the scope to match the picture profile of one camera to the other. There's always a lot of green/magenta shift when going through the beamsplitter mirror, so if you have the means to correct for it at the source, it's always a good idea. I think for this I used my usual preferred Picture Style - Faithful with Contrast all the way down and Saturation down a few points. I then used the White Balance and Tint controls to dial in the best match I could for the pair.

Once I had zero, I measured the lens to subject, lens to foreground, and lens to background distances. I then used the iPhone app IOD Calc to find an interocular distance that would put me safely within this range. Because there was no way to converge, or toe-in, the cameras without ruining the alignment, I just left them in parallel, knowing I could always adjust the convergence in post. I set my IO distance and we were ready to shoot. Because there was no way to monitor in 3D, we just kind of winged it and hoped for the best.
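IOD Calc's internal math is its own, but the classic back-of-the-envelope version is the "1/30 rule": with parallel cameras, keep the interaxial to roughly 1/30 of the lens-to-nearest-subject distance to stay within comfortable parallax. A minimal sketch, with an illustrative subject distance (not the actual numbers from our shoot):

```python
# Hedged sketch of the "1/30 rule" for interaxial distance. This is NOT
# the exact math inside IOD Calc, just the common rule of thumb: with
# parallel cameras, keep the interaxial to roughly 1/30 of the distance
# to the nearest subject.

def max_interaxial_inches(nearest_subject_feet, divisor=30.0):
    """Conservative maximum interaxial in inches for a given near distance."""
    nearest_inches = nearest_subject_feet * 12.0
    return nearest_inches / divisor

# Illustrative: a subject 8 ft from the lens allows about 3.2 in of IO
io = max_interaxial_inches(8)
```

Apps like IOD Calc refine this with focal length, sensor size, and screen parallax limits, but the rule of thumb gets you in the ballpark fast.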

IMG_5164.jpg

POST:

1x license of Final Cut Pro / Compressor. 

We wanted to see stereo badly, so after shooting a few tests, we took the left and right eye images into Compressor and made ProRes files, making sure that the audio recorded on the cameras was embedded in the new files.

1x license of PluralEyes. 

Next we imported all of the transcoded material into FCP and set up a timeline for PluralEyes sync. Because the audio is the same on both left and right eye images, PluralEyes does a frame accurate sync, and from there it's only a matter of getting them into a stereo pair somehow.
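PluralEyes' actual algorithm is proprietary, but the core idea behind waveform sync can be sketched: cross-correlate the two cameras' audio tracks and take the lag with the strongest correlation as the offset between them.

```python
import numpy as np

# Hedged sketch of what waveform sync boils down to (PluralEyes' actual
# algorithm is proprietary): cross-correlate the two audio tracks and
# take the lag with the strongest correlation as the offset.

def find_offset_samples(audio_a, audio_b):
    """Samples by which audio_b lags audio_a (positive = b starts late)."""
    a = audio_a - audio_a.mean()
    b = audio_b - audio_b.mean()
    corr = np.correlate(a, b, mode="full")
    # In full mode, output index i corresponds to lag k = i - (len(b)-1),
    # with c[k] = sum a[n+k]*b[n]; if b is a delayed by d, the peak sits
    # at k = -d, so flip the sign to report the delay directly.
    return (len(b) - 1) - int(np.argmax(corr))

# Toy example: identical audio, camera B rolling 48 samples (1 ms at 48 kHz) late
rng = np.random.default_rng(0)
signal = rng.standard_normal(4800)
cam_a = signal
cam_b = np.concatenate([np.zeros(48), signal[:-48]])
offset = find_offset_samples(cam_a, cam_b)
```

Once you know the offset in samples, converting to frames and slipping one track is trivial, which is exactly the alignment PluralEyes hands back to FCP.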

fcp_3d.jpg

There are a lot of ways to do this, but I didn't want to spend any money on plugins, so I thought: how can we easily create a side by side 1920x1080 video right in FCP? It's actually incredibly easy. PluralEyes put the 2 image streams on 2 separate video tracks and synced them so they're on the same frame. Now take the left eye video, go to the Motion tab in the Viewer, go to Distort, and in Aspect Ratio type 100. You now have one anamorphically squeezed half of a stereo video signal. You need to get it on the left side, though, so in Center type -480. This will place it on the left edge of frame, occupying exactly 50% of it. Now with the right eye, first apply a little Flip Flop effect to get it in the right orientation - any time you're dealing with mirrors there is image inversion. Then in the right eye video's parameters do the same thing, but in Center type 480 instead of -480. You now have a stereo pair in your timeline. You can't really make screen plane adjustments to them, at least not easily, and in order to edit them they would need to be output again so that the 2 images get baked into the same raster. But it does work, and if you have FCP, you don't need anything else.
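The -480/480 values aren't magic; they fall straight out of the geometry. A quick sketch of the arithmetic (the function name is mine, not FCP's): squeezing a 1920-wide clip to half width makes it 960 px wide, and its center then has to move from frame center (x = 960) to the center of its half of the raster.

```python
# Sketch of the arithmetic behind the FCP side-by-side trick. Squeezing a
# 1920-wide clip to 50% width makes it 960 px wide; its center must move
# from x=960 (frame center) to x=480 (center of the left half), i.e. a
# Center offset of -480. Mirror that for the right eye.

FRAME_W = 1920

def sbs_center_offset(eye):
    """Horizontal Center value for a half-width-squeezed clip ('left'/'right')."""
    half_w = FRAME_W // 2                      # 960 px after the squeeze
    if eye == "left":
        target_center = half_w // 2            # x = 480
    else:
        target_center = FRAME_W - half_w // 2  # x = 1440
    return target_center - FRAME_W // 2        # relative to frame center

left = sbs_center_offset("left")    # -480
right = sbs_center_offset("right")  # 480
```

The same math scales to any raster width, which is handy if you ever build side by sides at 1280x720 instead.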

1x Matrox MXO2 LE. 

Now the fun part - watching your stuff in 3D. In order to watch stereo from your timeline, you've got to have some sort of external hardware that will get the video signal off your computer and into an HDMI or SDI cable. I've had the Matrox box for a few years and, to my delight, they keep adding functionality to it free of charge. They recently added 3D support to the HDMI output, so if you have side by side media in FCP, the Matrox can send it to an HDMI receiver like a 3DTV and flag the signal as stereo so the display knows how to handle it. This worked great with the DIY side by sides I made in FCP, and we were watching stereo from my laptop in realtime. Awesome.

IMG_5020.jpg
andrea1.jpg

Model: Andrea Grant

That's it in a nutshell. It's always fun to experiment with this stuff. Like I said, this wouldn't be my first choice, but in a pinch, you could make it work.

Peter will be publishing the stereo video from these tests online at some point. I'll post when I have it. 

Awesome.

Fast and easy 3D that won't break the bank. 

I've had mixed experiences with the Panasonic 3D-A1 but I finally (and luckily) found a shooting scenario that was just the right dimensions for its somewhat limiting 2.5" Interocular distance -

Inside the tent at the Big Apple Circus.

The lens to subject and lens to background distances fit just within the camera's "stereo sweetspot" and I was able to capture beautifully dimensional images free of parallax violations. 

circus1.jpg
circus2.jpg

20 Mbps (or something like that) AVCHD coming from 1/4" imagers isn't exactly the highest quality, but it's good enough for a lot of applications as long as you're careful with the exposure. Thankfully the camera has the same useful picture profile settings as other Panasonic cameras in its class to help you with shadow and highlight rendering.

Also - I used the new Sony LMD-2451TD 24" passive 3D display for the shoot. A perfect field monitor. It uses circular polarization, so your RealD glasses will work with it. The quality of the stereo is so much better than the equivalent Panasonic version. Seeing as the only other option for such a high quality 3D display is to drag a JVC or LG TV out into the field, I'd say the Sony monitor is now the best thing going.

I pulled the trigger on Cineform Neo despite my last post and it's the perfect solution for this project. I imported the camera media into First Light with the click of a button and in an hour I had Cineform 3D files containing both left and right eye images that can be non-destructively adjusted at any time. I brought the material into FCP, hooked up the Matrox MXO2 to my LG 3DTV and was editing and adjusting convergence in real time immediately from my laptop. 

So f-n cool. There's no other way to put it. 

Panasonic-AVCCAM-AG-3DA1-Full-HD-3D-Professional-Camcorder-angle.jpeg
mxo2_le_max_jpg__66389_zoom.jpeg
340x_lg-ces-tv.jpeg

Think what you will about 3D but I'm convinced that it's almost here in a big way. It's just getting so much easier and cheaper to shoot it and new delivery options are opening up every day. For instance -

Stereoscopic 3D support in YouTube >>>

Soon, I predict, passive polarization will just be automatically built into almost all LCD and LED displays. Viewing 3D content will be as easy as a menu option on your screen. And getting content onto that screen will be as easy as viewing video on the web is now.

A few thoughts - GoPro Cineform, Dashwood 3D, LUT Buddy

Hope everyone had a good Rapture. I know I did and I'm looking forward to the next one. 

I'll be down the work rabbit hole for the next few months so updates to this site will continue to be sporadic at best. But today's a lazy Sunday and I've been playing around with some new toys and had a few thoughts I'll throw at ya. 

3D GoPro -

go-pro-3d-expansion-kit-icon.jpeg

I took one of these to the Big Apple Circus in Queens yesterday to do some tests for an upcoming stereo shoot. This camera could really benefit from a viewfinder! I shot and had no idea what I was getting until I got home. I must say, I've seen some incredible footage from the GoPro, but the images I got at the Circus were unfortunately not incredible at all. More like terrible. The GoPro reel is awesome, but all those shots have one thing in common - day exteriors. Sadly, the little camera that could just wasn't able to resolve the extreme contrast under the Big Tent, and the result is shadows that are overly gained up and highlights that are completely void of picture information.

gopro3d.jpg

The good news is that it's incredibly easy to import, conform, and adjust the 3D media in GoPro Cineform Studio. 

cineformstudio.jpg

There are even a few nice tutorials on GoPro's site that show you how to do it.

The 3D is also very good. The 1 1/4" Inter-ocular distance between the 2 lenses is just right for most shooting scenarios. A little flexibility would be nice but for the price and size, I'm not complaining. 

The active metadata system the software employs is very smart and makes on-the-fly stereo adjustments fast and painless right as you're editing in Final Cut Pro or even as you're viewing in Quicktime.

The second icon from the left is the Cineform Stereo Mode Status Node. Once you've converted your camera media into Cineform paired stereo clips, this button lets you select how you want to view the 3D at any time - Side by Side for real-time monitoring on a 3DTV, and all flavors of Anaglyph for web delivery.

stereomode.jpg
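To make the anaglyph option concrete, the simplest red/cyan flavor can be sketched in a few lines (Cineform's own anaglyph mixes are more sophisticated than this; the code below is just the textbook version): take the red channel from the left eye and the green/blue channels from the right.

```python
import numpy as np

# Hedged sketch of the simplest red/cyan anaglyph, one of the "flavors"
# a stereo mode switch can offer. Cineform's own mixes are fancier; this
# is just the textbook channel swap.

def simple_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: HxWx3 float arrays. Returns the red/cyan composite."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel comes from the left eye
    return out

# Toy eyes: pure red left, pure blue right; the composite carries both
left = np.zeros((2, 2, 3)); left[..., 0] = 1.0
right = np.zeros((2, 2, 3)); right[..., 2] = 1.0
ana = simple_anaglyph(left, right)
```

The appeal for web delivery is obvious: the result is a single ordinary video frame, viewable with paper glasses on any display.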

After converting the Left and Right eye video into merged Cineform 3D files, I used the Stereo Mode button to tell Final Cut Pro to display the media in Side by Side mode. I hooked up my Matrox MXO2 to my 3DTV and voila, real time stereo playback with no rendering. Awesome. The 3D effect the GoPro creates is really nice. The small I/O makes Background Parallax problems much less extreme than on other fixed lens cameras such as the Panasonic 3D-A1. 

You can even export the various 3D formats - Side by Side, Anaglyph, Interleaved, etc. right from Cineform Studio for project finishing. I only wish that it worked with codecs other than the 3D GoPro's but I suppose that's what $299 Cineform Neo is for. Last month it was 500 bucks and called Neo HD, now it's 300 with the "HD" gone from the name. I was just about to pull the trigger on that software and now I'm really glad I didn't. This just seems to be the way these smallish software companies operate - they're constantly re-pricing and re-structuring their product lines, meanwhile the professionals who rely on these tools are just supposed to eat it when the price is cut in half without warning. It really ticks me off and I'm done buying software for awhile especially since who knows what GoPro has in mind for the pay versions of Cineform. If you recently paid $3000 for a Neo 3D license, I'm sure you're thrilled that they've suddenly shaved 2000 bucks off the price. I finally spent 1000 on a legit license for Final Cut Pro and 2 months later they announced FCX for $299. Same old story. 

Anyways, Cineform Neo works very much like GoPro Cineform Studio but can import and convert just about any flavor of video while keeping the stereo controls completely malleable through their ingenious active metadata system. It also gives Final Cut and Compressor access to the Cineform codecs which alone is worth the price of admission. It's probably worth 300 bucks but who knows, in a few months it may be 100 so I'm in no hurry to click that "Buy Now" button. 

Dashwood 3D makes a $100 plugin called Stereo Toolbox Lite which lets you mate left and right eye clips and do stereo adjustments right in Final Cut Pro. It's pretty nice and easy to use, but it has one massive disadvantage compared to Cineform - the effect has to be rendered to be played. Unlike the converted Cineform media, which plays in stereo in real time and can be adjusted at any time without re-rendering, to even play back Dashwood 3D the timeline must be rendered. If you have a fast system, this is no problem, but if you do render and then find you need to adjust the convergence, that means a re-render. I like the Dashwood plugin and think it's a useful tool, but it's somewhat limited in comparison to the similarly priced Cineform Neo. The full version of Stereo Toolbox is $1500, and I'm in such a bad mood about software pricing that I'm not even interested in what it does that the Lite version doesn't.

Here's another great plugin for FCP that's now a lot more handy in this new Alexa Log C world we're living in -

Red Giant's LUT Buddy. Useful and free. How bout that.

For a free plugin, it's awesome. I've found that it doesn't like some LUTs that are technically within spec and should import; still trying to figure that out. .mga files from Apple Color are no problem, which is nice functionality. Beggars can't be choosers, but the whole point of Look-up Tables is to be non-destructive, and unfortunately, once again, the effect must be rendered in order to be viewed. It would be great to be able to apply a LUT to the Canvas window and video output without rendering, but at any rate, the software provides some badly needed LUT support in FCP. It could be useful for dailies generation, but there are so many other faster, more efficient ways of doing that that I'm not a big fan of the FCP route. If only you could import a LUT into Compressor..
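For anyone fuzzy on what a LUT actually does under the hood, the 1D case is just a per-channel table lookup. A hedged, illustrative sketch (LUT Buddy's file formats and internals are its own; the gamma curve and sizes here are mine):

```python
import numpy as np

# Illustrative sketch of applying a 1D LUT: each 8-bit channel value
# indexes into a precomputed 256-entry table. Here a gamma curve is
# baked into the table; gamma=1.0 yields an identity LUT.

def build_gamma_lut(gamma, size=256):
    """Precompute a gamma curve as a 256-entry 8-bit table."""
    x = np.linspace(0.0, 1.0, size)
    return (x ** gamma * 255.0).round().astype(np.uint8)

def apply_1d_lut(image_u8, lut):
    """Per-channel table lookup - the whole 'LUT' in a nutshell."""
    return lut[image_u8]

lut = build_gamma_lut(1.0)                    # identity table
img = np.array([[0, 128, 255]], dtype=np.uint8)
out = apply_1d_lut(img, lut)
```

3D LUTs extend the same idea to a lattice indexed by all three channels at once (with interpolation between lattice points), which is why they can capture hue and saturation moves that independent per-channel tables can't.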

There's some great info on the software and LUT's in general on their site, re-posted here -

What is LUT Buddy?

Magic Bullet LUT Buddy is a professional tool that generates Look Up Table (LUT) data within the host apps After Effects, Premiere Pro, Final Cut Pro and Motion. In a simple operation, the LUT can be loaded into almost any color correction system, from Apple Color to FilmLight Baselight to DaVinci Resolve. The LUT can also be loaded into hardware devices like Panasonic or Sony displays for previewing a color correction on the device. Because it lets all artists share the same color correction data, LUT Buddy truly bridges the gap from desktop apps to high end systems and hardware boxes.

How do I know if I need LUT Buddy?

Folks who need LUT Buddy will instantly recognize its value since we designed LUT Buddy for the professional colorist and compositor. LUTs are part of a cool but somewhat esoteric professional workflow for sharing color correction operations across different users. Because LUTs take some explanation, you should read the documentation and decide whether this product is for you. The product is simple to use but you have to understand what to do with the color data it generates. 

What is CDL?

CDL stands for Color Decision List, a universal color grading standard from the ASC (American Society of Cinematographers) that anyone can follow and implement. It provides a format for the exchange of basic primary color grading information. Think of a CDL as the PDF of the color correction world. CDL’s let you enter and edit color values in an application and share them across other compliant software. Colorista Free exposes its CDL data and makes that list available for an open, easy workflow that conforms to professional colorist standards.

What is a LUT?

A LUT or ‘Look Up Table’ simulates what color output will look like. It is mainly used for communicating between color systems that don't have another way of talking. LUT Buddy supports a wide variety of LUTs, such as a 3D LUT which captures sophisticated operations like Lift/Gamma/Gain, Hue offset, Saturation change and color selection. Let’s say you use our Colorista II plug-in to make color adjustments in Final Cut Pro and you want to move those color decisions to an Autodesk Smoke system. LUT Buddy can capture that information as a .3DL file, and Smoke can open that file to recreate the color changes with very close fidelity. This sharing of information is what LUTs are all about.

When should I use LUT Buddy?

LUT Buddy is a very specialized product that lets you capture and communicate more complicated color correction operations than are possible with the CDL-compliant correction in Colorista Free. LUT Buddy is designed to provide an easy method to communicate color correction operations between editors, compositors and colorists using different software applications or devices.

How does LUT Buddy work?

LUT Buddy works on a simple idea - you can capture color correction operations by measuring the changes in a set of known pixel values. LUT Buddy draws a complete set of color tables on an image before any color operations are executed. The user does his color correction on the image with the tables. Then LUT Buddy reads the changes to the color tables and extracts the difference as a Look Up Table or 'LUT'. A number of industry-standard export formats are supported, including common 3D LUTs for Nucoda, Autodesk, Apple Color, Iridas, and Tweak RV formats.

Can I use LUT Buddy to extract and share the color presets from Magic Bullet Looks?

No. LUT Buddy does not work on color operations which create changes across multiple pixels, and that’s what Magic Bullet Looks is all about. While LUT Buddy does capture some of these operations, such as 3 Way correction or Secondary correction, using Looks controls like Diffusion, Vignette and Streaks will utterly confuse LUT Buddy. The undesirable result will be that the output from LUT Buddy will look nothing like the presets in Looks. We are giving away LUT Buddy to help filmmakers, not to confuse them.

Can LUT Buddy capture color changes in Colorista II or other color correctors?

Yes, up to a point. Operations such as 3 Way correction, Saturation, Hue, Secondaries, and Curves can all be replicated in a 3D LUT created by LUT Buddy. Other color tools that do similar operations, such as Final Cut Pro’s Color Corrector 3-Way, can also be captured with LUT Buddy. However, operations that only change a part of image, like the Power Masks or Pop slider in our Colorista II, will break LUT Buddy. The output will either be garbage or will look nothing like your intended changes.

Good night!