
Dome ports and underwater 3D


#1 echeng



Posted 28 July 2010 - 08:10 PM

Regarding domes and 3D:

I'd be interested in seeing dual-dome 3D on a big screen. Everything looks "3D" when viewed at YouTube sizes, and it's even easier when viewed cross-view because the images don't have to be properly horizontally aligned.

But ... cameras focus on a curved virtual image when you shoot out of a dome port. So you're reproducing 3D... but the 3D of the virtual images, not the actual 3D. It may be different with fisheye lenses because they probably focus on a curved surface (am I wrong on this?).
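For a sense of scale, here's a rough sketch of where that virtual image sits, using the standard single-surface refraction relation and assumed numbers (a thin dome concentric with the lens entrance pupil, 10 cm dome radius):

```python
# Rough sketch (assumed numbers): single-surface refraction at a dome
# concentric with the lens entrance pupil: n1/so + n2/si = (n2 - n1)/R.
# For a subject at infinity the n1/so term vanishes.
n_water, n_air = 1.33, 1.00
radius_m = 0.10  # assumed dome radius: 10 cm (roughly an 8" dome)

si = n_air * radius_m / (n_air - n_water)  # image distance, subject at infinity
print(f"virtual image of a subject at infinity: {si:.2f} m")
# negative sign = virtual image about 30 cm in front of the dome,
# so the lens has to focus there, not at infinity
```

That ~30 cm virtual surface is why dome shooters need lenses that focus very close, and it's consistent with the "not very far away from the viewer" effect described below.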

Howard Hall had a huge dome made for his 3D IMAX system. They shot some test footage and projected it in a theater. What they got was 3D... but the 3D of a curved, bizarre surface that was not very far away from the viewer (consistent with focus on a virtual surface).

If you're doing this for small viewing sizes, it isn't going to matter. 3D is weird at small sizes, anyway -- you can diverge the image by a huge % of the screen width and be fine. But large-screen 3D requires a lot more effort.

What does "actual" 3D mean, anyway? I dunno, and it probably depends on your target audience. But there would definitely be a huge difference -- and not for the better -- which is why I'm shooting through a flat port.
eric cheng
publisher/editor, wetpixel

#2 HDVdiver



Posted 28 July 2010 - 11:21 PM

I am also very curious to see how successful stereoscopic 3D will be when shooting through two domes. I don't think it would work with a single large dome, because each camera would be out of concentric alignment with the dome's nodal point. It wouldn't even be good for a single-camera system, let alone for blending L & R video streams.

On the other hand, I can't see why it shouldn't work if each camera is shooting through its own dedicated dome. The fact that it's recording a virtual image shouldn't matter (hopefully), since each camera's virtual image should still blend during editing, and the depth information should be there because the camera isn't simply recording one virtual image... but layers of virtual images corresponding to the distances of the subjects from the dome.

The only uncertainty, as I see it, is that the virtual image curvature also varies with subject distance. Since the dome is spherically symmetric, every object at infinity produces a virtual image at the same distance from the dome. Hence, 'infinity' is mapped onto a sphere that is concentric with the dome but has a larger radius. Objects closer than infinity produce virtual images that are more flattened, but still wrapped onto a curved surface.
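A quick sketch of that mapping, assuming a thin dome with the entrance pupil at its center of curvature and the usual single-surface refraction formula (distances in dome radii; a negative image distance means a virtual image on the water side of the dome):

```python
# Sketch under assumed ideal conditions (thin dome, pupil at the dome's
# center of curvature): how the virtual image distance varies with
# subject distance. All distances in dome radii.
N_WATER, N_AIR = 1.33, 1.00

def virtual_image_distance(so):
    """Solve n1/so + n2/si = (n2 - n1)/R for si, with R = 1 dome radius."""
    return N_AIR / ((N_AIR - N_WATER) - N_WATER / so)

for so in (float("inf"), 10.0, 3.0, 1.0):
    si = virtual_image_distance(so)
    print(f"subject at {so:>4} R -> virtual image at {si:+.2f} R")
```

A subject at infinity lands roughly three radii in front of the dome, and closer subjects land closer still, so the "layers of virtual images" mentioned above do preserve a monotonic depth ordering.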

My guess is that it should still be OK for 3D, since all variables are the same for both cameras except the simulated interocular distance. Not sure what would happen when you throw convergence into the arrangement.

I've just received a couple of optical glass dome blanks from Germany. When I get the time, I'll mod our prototype 3D housing from a flat port to twin domes and see how it works in reality.