I’ve been messing around with 360° immersive video at work. One of the best ways to quickly get familiar with the technology is to use it in a difficult circumstance so you can find its limitations. At work we’re building immersive video to show a virtual walk-through of our school. If the gimbal and camera we have will work on a motorbike, it’ll work stuck to a kid’s head as they walk through the school.
There are a number of barriers to entry with 4k video and image stabilization. Fortunately, the 360Fly4k windshield mount I have is so over-engineered that it easily handles the weight and motion of the gimbal and camera rig.
I’ve previously done 4k video with the 360Fly4k, but it has a big blind spot, so this would be my first true 360° 4k video. The Fly is a tough camera that takes great footage, but I’d describe it as more of a 300° camera than a true 360° one.
This 4k 360 camera is the Samsung Gear 360. I’m running it off the camera itself because the app won’t run on my non-Samsung Android phone. I guess Samsung don’t want to sell many of these cameras; it’s kind of a jerk move on their part, so if they don’t sell (because you need a Samsung phone for remote access), they’re getting what they deserve.
The Gear 360 has a small screen so you can see settings, and the buttons are fairly straightforward to use, though you’ll find yourself constantly pressing them by accident while handling the camera. The Ricoh Theta 360 is still my ergonomic favourite in terms of control and handling, and Ricoh just came out with a 4k version of the Theta – perhaps they’ll lend me one to test.
The gimbal is a Moza Guru 360° Camera stabilizer. The typical gimbal design has weights to the left or right of the camera to keep things balanced, but on a 360 camera that blocks all sorts of sight lines. The Moza gimbal is vertically stacked with the weights hanging below, mostly out of sight. It has a power button and a push-button joystick that lets you set shooting modes and centre the camera so it’s looking where you’re going rather than down the ‘seams’ between the two lenses.
Most 360 cameras are actually two or more cameras working together, and the resulting footage is stitched together in software to make a video that covers every direction. The raw footage from the Samsung (left) shows the front- and back-facing fisheye cameras capturing separate footage.
Because the two cameras are capturing different scenes, you can often see where the footage is stitched together: a difference in ISO shows up as a clear line of brightness (right). The lenses themselves tend to be identical and fixed, so aperture and shutter speed are usually the same on both sides.
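Stitching like this usually starts from a lens model: each pixel of the final 2:1 equirectangular frame maps back to a point in one of the two fisheye images. Here’s a minimal sketch of that mapping in Python, assuming an idealized equidistant fisheye with a 195° field of view – the function name, FOV, and image layout are illustrative guesses, not Samsung’s actual calibration:

```python
import math

def equirect_to_fisheye(u, v, width, height, fisheye_size, fov_deg=195):
    """Map an equirectangular pixel (u, v) to a (camera, x, y) position
    in one of two back-to-back fisheye images (equidistant lens model).
    All names and the 195-degree FOV are illustrative assumptions."""
    # Pixel coordinates -> spherical angles
    lon = (u / width) * 2 * math.pi - math.pi      # -pi .. pi
    lat = math.pi / 2 - (v / height) * math.pi     # pi/2 .. -pi/2
    # Spherical angles -> 3D unit vector (x right, y up, z forward)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    # Front lens looks along +z, back lens along -z
    if z >= 0:
        camera, zc, xc = "front", z, x
    else:
        camera, zc, xc = "back", -z, -x  # mirror so the back image isn't flipped
    # Angle off the lens axis; equidistant fisheye: radius proportional to angle
    theta = math.acos(max(-1.0, min(1.0, zc)))
    max_theta = math.radians(fov_deg) / 2
    r = (theta / max_theta) * (fisheye_size / 2)
    phi = math.atan2(y, xc)
    fx = fisheye_size / 2 + r * math.cos(phi)
    fy = fisheye_size / 2 - r * math.sin(phi)
    return camera, fx, fy
```

A real stitcher also blends the overlap region (the few degrees each lens sees beyond 90°) to hide exactly the exposure seam described above; a hard front/back cut like this sketch would show the line clearly.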
The first test video has the Samsung set at its highest resolution (4096×2048 pixels in video) at 24FPS. The gimbal is in locked mode, so it’s always looking in the same direction even as I go around a corner. The gimbal produces smooth video by taking the bike’s motion out of the shot (it holds a fixed direction while the bike and I rotate around it), but a bike’s motion is one of the best parts of riding, so for the second shot I set it to tracking mode so it followed the bike’s movements.
Uploading it to YouTube out of the Gear 360 Action Director resulted in a flattened video that doesn’t let you pan. To get a proper 360 video out of the G360-AD (what a ridiculous name), you need to PRODUCE the video in the software and then share it to YouTube from within the program. My issue is that when you bring the footage in, it takes an Intel i7 VR-ready laptop the better part of twenty minutes (for less than ten minutes of footage) to process before you can do anything with it. When you produce it (again) for YouTube, you wait another twenty minutes. The Ricoh Theta saves the video (albeit 1080p equivalent) in a fraction of the time, and the saved file is 360-ready for YouTube; the 360Fly software is likewise efficient at 4k. I’m not sure why I have to wait forty minutes to produce less than ten minutes of footage on the Samsung. I know it’s a lot of data to work through, but it isn’t a very streamlined process.
So, after a lot of post-processing, the 4096×2048 360° video out of the camera shows up on YouTube at 1440s (the s stands for spherical, by analogy with the p in 1080p; spherical footage is stretched across a much wider field of view, so it tends to look less sharp at the same nominal resolution). I’m not sure where my 2048s footage went – I imagine part of that long post-processing was shrinking the footage to fit on YouTube more easily?
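To put numbers on what that downscale costs: assuming the s label tracks the vertical resolution of a 2:1 equirectangular frame (the same way p does for flat video), dropping from 2048s to 1440s throws away about half the pixels. A quick back-of-the-envelope check – the 2880×1440 figure for YouTube’s 1440s frame is my assumption, not a published spec:

```python
# Rough pixel-budget comparison between the camera's native equirectangular
# frame and the quality YouTube served. The 2:1 frame sizes are assumptions
# based on the 's' label matching vertical resolution, like 'p'.
native = 4096 * 2048      # "2048s" frame out of the Gear 360
served = 2880 * 1440      # assumed "1440s" frame on YouTube

print(native, served, round(served / native, 2))  # -> 8388608 4147200 0.49
```

So roughly 49% of the camera’s pixels survive the trip, which matches how much softer the uploaded video looks than the local file.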
If you click on the YouTube logo you can watch it on YouTube and adjust the resolution (bottom right) to see how it looks (make sure to watch full screen to use all your pixels). If you’re lucky enough to be watching on a 4k display, this will come close to filling it.
The quality is excellent and the microphone remarkably good (microphones get beaten up pretty badly on motorcycles), but the awkward post-processing and the ergonomics of the thing keep it from being my first choice. Trying to manage it with gloves on would be even more frustrating. What you’ve got here is a good piece of hardware let down by weak product design and software.
The software does offer some interesting post-processing options in the form of wacky art filters, but if you’re shooting at 4k all they do is drastically reduce the quality of your video. If you’re going to use those filters, film at a much lower resolution so you don’t have to wait hours while they process.
I’m aiming to go for a ride tomorrow to look at the fall colours after our first frost. I’ll bring the Samsung along and see how well it photographs. It’s promising 15 megapixel 360 images and high dynamic range landscapes, so I’m optimistic. Photography is timeless and my preferred visual medium anyway, I find video too trapped by the continuity of time. Maybe the Samsung will be a good photography tool. Of course, I won’t be able to fire the thing remotely because I don’t have a Samsung phone…