
Getting rid of the water in your images


More info about this would be welcome.

 

Edited by Etc


The question really seems to be: what additional information is needed to correct the image? It looks like you need an image of a colour chart as well as multiple distances and angles to correct a single image. That would be rather complicated and not really useful for most photographers, especially those who won't spend ten minutes on a single photo because they have buddies waiting.

Still, this is really interesting and I wonder how it works in detail. She says it is not image manipulation just to be pleasing, but surely it needs the same tools and might have similar drawbacks.

5 hours ago, hyp said:

The question really seems to be: what additional information is needed to correct the image? It looks like you need an image of a colour chart as well as multiple distances and angles to correct a single image. That would be rather complicated and not really useful for most photographers, especially those who won't spend ten minutes on a single photo because they have buddies waiting.

Still, this is really interesting and I wonder how it works in detail. She says it is not image manipulation just to be pleasing, but surely it needs the same tools and might have similar drawbacks.

Totally agree, that's why I would like to know more.

Anyway, I understand that all these procedures with the color chart etc. are just part of the development of the software.


The results of the technique do not look that incredible to me. Once properly white balanced, most of the close shots look pretty much like those in the video.

Maybe I am missing something, but once corrected, I do not see light years between the examples and the pictures I already take today.



I think the difference is the same as with many other AI post-processing techniques: once the technology is perfected, you don't need as much skill to create quality pictures. Underwater photography requires a good deal of understanding of flash exposure and other technical details that many are too lazy to acquire. If this worked properly (right now it seems to still require a fair amount of effort), you could just shoot ambient light, and all of the colour problems that come with it would be solved in post by a one-click solution.

This forum is small and most content posted is fairly high quality, but if you look at other sites many pictures are shockingly bad, and sometimes despite the use of very expensive setups. Setting everything to auto (and TTL) is not really an option for underwater photography, and this might make it possible.

It's similar to the new AI sky-replacement methods. At first you needed to get the timing right and be on location at the right time. Then people started replacing skies manually, but it took significant editing skills. Now you choose a sky from a gallery, click OK and you're done.

For people who don't have the skills this is amazing. For people who do, it just invalidates their knowledge. 


1. The color chart isn't necessary. It's there to confirm the accuracy of the technique.

2. The technique as I understand it requires multiple pictures of the same object from different distances in order to reverse-engineer the water filtration factor by comparing the colors of that object (or pixel) from different distances. 

3. The advantage over a simple white balance as far as I understand is that it depth-maps all the elements in the picture in 3d space and appropriately color-corrects for all of them depending on the amount of water between that object and the camera.  So you'd see warm colors extending far into the background, not just for the foreground subject as you would get with a normal white balance off a grey card at foreground distance. 

4. For photos this process is rather cumbersome as it forces you to take multiple pictures of the same subject from different distances. So it will not provide a 1-click adjustment for photos in its current form.

5. For video, however, this could be brilliant if your clip involves movement anyway, as you can get a lot of distance information from subsequent frames (the same way you can get 3D mapping from a moving clip when doing photogrammetry). So potentially this could be implemented as a 1-click solution for a video file, though it would obviously work better with raw video.
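For those curious about the math behind points 2 and 3, methods like this invert a standard underwater image-formation model: the camera sees the true colour attenuated with distance, plus backscattered veiling light that grows with distance. A minimal sketch of that inversion, assuming the per-channel attenuation and backscatter have already been estimated (the coefficient values and function names below are my own illustration, not the actual code from the talk):

```python
import numpy as np

def restore_color(image, depth, beta, backscatter):
    """Invert a simple underwater image-formation model:
    observed = true * exp(-beta * z) + backscatter * (1 - exp(-beta * z))

    image:       HxWx3 linear RGB, values in [0, 1]
    depth:       HxW per-pixel camera-to-subject distance (metres)
    beta:        per-channel attenuation coefficients, shape (3,)
    backscatter: per-channel veiling-light colour, shape (3,)
    """
    z = depth[..., None]                       # HxWx1 so it broadcasts per channel
    transmission = np.exp(-beta * z)           # fraction of direct light surviving
    direct = image - backscatter * (1.0 - transmission)   # strip the veiling light
    restored = direct / np.clip(transmission, 1e-6, None) # undo the attenuation
    return np.clip(restored, 0.0, 1.0)

# Illustrative (assumed) coefficients: red attenuates fastest, which is
# exactly why distant objects lose their reds first underwater.
beta = np.array([0.40, 0.10, 0.05])
backscatter = np.array([0.05, 0.15, 0.25])     # bluish veiling light
```

The multiple-distance shots are what make estimating `beta` and `backscatter` possible in the first place: the same object photographed through different amounts of water constrains the exponential falloff per channel.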

Edited by dreifish


It may automate things, but once you run out of red light when things get deep enough, you will be back where you started, I would think. I also noted, for example at around 1:30, that the scene looks great on the reef but it seems to suck the rich blues out of the background water.


She is using raw images, so this appears to be a mix of white balance and dehaze. I guess the different shots get the appropriate level of correction needed. So this could work at depths where colours are still present, but it won't replace flash anyway. The blue water scatters light, and dehaze removes scatter, but water also absorbs light, both in terms of colour and general intensity. It looks very promising for ambient-light shots at shallow depth for those who are not that good at post-processing. Still, many shallow pictures, once run through current tools, look pretty good and are not that different from those results.
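The scatter-versus-absorption distinction maps onto two different operations: dehazing subtracts an additive veiling-light term, while white balance applies a multiplicative per-channel gain. A toy sketch of combining the two (all names and values are illustrative, not from the talk):

```python
import numpy as np

def dehaze_then_balance(image, veil_rgb, haze_fraction, wb_gain):
    """Scatter ADDS a veiling-light term; absorption MULTIPLIES channels down.
    Dehazing subtracts the additive part, white balance undoes the gain.

    image:         HxWx3 linear RGB
    veil_rgb:      colour of the veiling light, shape (3,)
    haze_fraction: fraction of the signal that is backscatter (0..1)
    wb_gain:       per-channel multiplicative gain, shape (3,)
    """
    # Remove the additive backscatter component.
    dehazed = (image - haze_fraction * veil_rgb) / (1.0 - haze_fraction)
    # Undo per-channel absorption with a multiplicative gain.
    return np.clip(dehazed * wb_gain, 0.0, 1.0)
```

Neither step alone is enough: white balance on its own cannot remove the bluish haze in the background, and dehaze on its own leaves the reds attenuated.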




My understanding is that this is intended to produce consistent colour - useful for scientific comparisons where different shots may have different water distance between camera and subject. I haven't seen anything suggesting it be used "recreationally"? Personally I shoot underwater because I want the photos to look like they were taken underwater...


I'd like to have it as an option and starting point - especially in-camera if it works with video.


The technique is a "forward" calculation that tries to use the inherent optical properties of the water to calculate what the water does to the light passing through it. It tries to estimate those optical properties by using pictures at various distances so as to "calibrate" the estimates.

Conversely, when we photographers "white balance" something, we do the "inverse" problem, not the "forward" problem. We pick something in our single image that we know to be "white" or "neutral gray" and force it to be white or gray in our image; so we are not calculating what the water does to our image, we are forcing the final result to be "correct."

In principle, if everything in our image were the same distance from the camera, we should get the same result as a correctly done forward calculation. But our image has things at various distances from the camera, so we ought to have a white-balance methodology that is different for each part of the image, each pixel, in fact. Then we would converge to the fancy forward solution, except probably better, because the inherent optical properties are not all known, they change with time and depth, and the calculations are difficult.
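That per-pixel white balance could be sketched as follows, assuming exponential attenuation, a single grey-card reading at a known distance, and a depth map for the image (the function, parameter names and values are my own illustration):

```python
import numpy as np

def depth_aware_white_balance(image, depth, gray_rgb, gray_depth):
    """Per-pixel white balance: scale the single-distance grey-card gain
    by each pixel's water path length, assuming exponential falloff
    (measured = true * exp(-beta * distance)).

    image:      HxWx3 linear RGB
    depth:      HxW camera-to-subject distance (metres)
    gray_rgb:   measured RGB of a grey card shot at gray_depth metres
    gray_depth: distance at which the grey card was photographed
    """
    # Per-channel attenuation implied by the grey-card measurement,
    # relative to the least-attenuated channel.
    beta = -np.log(gray_rgb / gray_rgb.max()) / gray_depth
    # Gain grows exponentially with each pixel's own distance.
    gain = np.exp(beta * depth[..., None])
    return np.clip(image * gain, 0.0, 1.0)
```

At the grey card's own distance this reduces to an ordinary white balance; pixels farther away get a stronger red boost, pixels closer a weaker one, which is the behaviour the forward calculation delivers.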

