The main job of a retoucher is to fix things that are broken in an image without leaving a noticeable trace behind. Most of the time this is done with Photoshop’s oldest and best-known tools: the Healing Brush and the Clone Stamp.
However, there are situations when it’s better to start from scratch instead of fixing something that’s broken. There may be multiple reasons behind this decision. Maybe the damaged area is simply too far gone to be repaired. Or too big. Or maybe fixing it would take far more time than replacing it altogether.
The solution in such circumstances may lie in some compositing work. Usually this means replacing a bad area from the main shot with a good area from a second shot. But what if the second shot is just as bad as the main one? Or what if there’s no second shot at all?
These are precisely the moments when 3D software can save the day. Depending on the complexity of the scene, recreating parts of it in 3D (with the purpose of compositing them into the main shot later) can solve problems that otherwise cannot be solved. All of this sounds vague and we’re starting to get lost in too many words, so let’s take a look at a real example from a job I had a few months ago.
Notice how the sand looks in the left-side image and how it changed in the right-side image. How would you fix that? You cannot clone and heal that out. You cannot take the sand from another shot either, because the desert doesn’t change from one frame to another, so the same problem shows up in every frame.
It may be hard to believe, but in order to fix that area I recreated the lower part of the scene in 3D, using a similar camera angle and lights that match the real-life conditions. The new sand that you’re seeing in the lower part of the image is 100% 3D generated.
I used Blender for this job, but that’s almost irrelevant: aside from their distinct workflows, all of the top 3D applications that aim for photorealism can produce similar results.
When something is described as “photorealistic,” it means an image or video that looks as if it was produced with a camera in real life, even though it was not. Or, as some say, photorealism is when you blur the line between computer-generated images and reality.
Photorealistic renderings have always been a goal, but they weren’t possible until recently. In the same way that older cameras take “bad” pictures (bad meaning lacking that modern, high-megapixel, high-dynamic-range flavor), older 3D software was simply not capable of reproducing light as realistically as today’s 3D software can.
If you think that the ability to perform changes like the one illustrated above (replacing the sand in a desert) is something too small or maybe too simple to be called a 3D revolution, let’s take a look at a more complex scene.
In the following example I tried to recreate in 3D a shot taken by Xavi Gordo in April 2017 at an amazing location called “La Muralla Roja” (“The Red Wall”) in Calpe, Spain. I kept the man, the plants, and the background from the original shot; everything else (architecture, chairs, tables) was modeled and textured in 3D.
At first sight the shots above may look identical, but take a closer look, because they aren’t. One shot served as the reference from which the other one was created.
Have you guessed right? The left-side image is 100% reality and the right-side image is 50% reality and 50% 3D. If you want to check out the exact differences between the original version and the 3D version, here is an overlaid GIF illustration that I’ve made for this purpose.
It’s not a perfect match, mainly because I didn’t have any blueprints for this shot (I created the architecture and everything else just by looking at the reference photo), but I still think it’s pretty hard to tell which one is real and which one is not. For an untrained eye, it may be impossible.
The illustrations above show how close to reality a 3D render can come. Besides the man, the plants, and the background, everything you see in the right-side image was 3D generated. This means: the architecture and furniture, all the textures and materials, the scratches on all surfaces, the rust on the edges, the dirt in the crevices, the lights, the shadows, and the entire atmosphere.
If you’ve worked only as a retoucher so far, the 3D workflow may seem a little counterintuitive at first. The main difference is that in retouching you start with an image that has flaws and try to remove as many of them as possible, while when creating a 3D environment you start with a perfect scene and try to artificially add credible flaws.
Speaking of lights: in order to generate 3D renders that look as realistic as possible, it’s highly recommended to use HDRi files in your workflow, which is exactly what I did in this example. HDRi stands for “High Dynamic Range Imaging”; in simple terms, these are 360° panoramas of real-life environments which will give you very accurate lighting and cast real-looking shadows. Also, when using an HDRi instead of the old-fashioned standard lamps, besides the light itself you’ll also get authentic reflections across your scene.
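If you’re curious what that looks like in practice, here is a minimal sketch of setting up an HDRi environment in Blender through its Python API (bpy). The file path and values below are placeholders for illustration, not something taken from the original project.

    import bpy

    # Make sure the scene's world uses a node-based setup
    world = bpy.context.scene.world
    world.use_nodes = True
    nodes = world.node_tree.nodes
    links = world.node_tree.links

    # Load an HDRi panorama into an Environment Texture node
    # (the path is a placeholder; point it at your own .hdr/.exr file)
    env_tex = nodes.new(type="ShaderNodeTexEnvironment")
    env_tex.image = bpy.data.images.load("/path/to/your_panorama.hdr")

    # Feed the panorama into the existing Background shader,
    # which lights the whole scene and shows up in reflections
    background = nodes["Background"]
    links.new(env_tex.outputs["Color"], background.inputs["Color"])

    # Optional: control the overall strength of the HDRi lighting
    background.inputs["Strength"].default_value = 1.0

From there, rotating the panorama (for example with a Mapping node in front of the Environment Texture) lets you line up the light direction with the real shot, which is what makes the shadows and reflections sit convincingly in the composite.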
I sincerely believe that in the following years, 3D and computer-generated images are going to be mixed more and more with the real scenes captured by photographers with their cameras. It happened in the past as well, but due to the serious limitations of 3D software, the human eye was always able to tell what wasn’t real.
That’s no longer the case, and from this point the only way is up. At the end of the day, this could become a new way of controlling in post-production the things that cannot be controlled on set, and it points toward a future of much more engaging images.