the technology behind the magic
"Photogrammetry" is the science of making 3D measurements from an input of 2D images.
The diagram shown above (pulled from clemson.edu) beautifully illustrates the basics of the process: by matching the same feature points across different photographs, it is possible to automatically extract a 3D model.
But how does it work? Photogrammetry software often describes itself as magic, since it can automatically turn 2D data into 3D. On this page you will find more technical details on how photogrammetry really works: it's science, not magic after all!
The input to any photogrammetry software is a set of photographs. Every photograph is taken with a physical device, a camera, and every physical device introduces aberrations and distortions; by calibrating the device, those distortions can be corrected.
The camera calibration process can be done manually, although the most advanced photogrammetry software can perform it fully automatically as part of the Structure from Motion algorithm.
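To make the distortion idea concrete, here is a minimal sketch of radial lens distortion (the Brown model commonly estimated during calibration) and its inversion by fixed-point iteration. The function names and the coefficients k1 and k2 are illustrative assumptions, not values from any particular software:

```python
def distort(x, y, k1=-0.25, k2=0.05):
    """Apply radial (Brown model) distortion to normalized image
    coordinates (x, y). k1, k2 are illustrative coefficients; in
    practice they are estimated by the calibration step."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1=-0.25, k2=0.05, iterations=20):
    """Invert the distortion by fixed-point iteration: repeatedly
    divide the distorted point by the factor evaluated at the
    current estimate of the undistorted point."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

# a point near the image corner is pulled inward by barrel distortion...
xd, yd = distort(0.8, 0.6)
# ...and recovered by the iterative undistortion
xu, yu = undistort(xd, yd)
```

Once the coefficients are known, every input photograph can be resampled this way into an undistorted image before the geometric steps run.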
Structure from Motion runs a geometric reconstruction algorithm by first matching the same feature points (patterns of pixels that a computer can recognize) across two or more images. Combining this step with camera calibration yields undistorted images and the correct placement of the photographs in 3D space. The output of this phase is a sparse point cloud together with the camera position information (represented by the tiny blue pyramids).
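The core geometric operation behind the sparse point cloud is triangulation: once two cameras are placed in space, a matched feature point defines one ray per camera, and the 3D point is recovered where the rays (nearly) meet. Here is a hedged sketch of midpoint triangulation; the function name and the example coordinates are illustrative:

```python
def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: given two optic rays (camera center c,
    direction d), solve the 2x2 system for the ray parameters s, t
    that minimize the distance between the rays, then return the
    midpoint of the two closest points."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # approaches 0 for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [ci + s * di for ci, di in zip(c1, d1)]
    p2 = [ci + t * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# two cameras one unit apart, both seeing the same feature:
# the rays meet at (0.5, 0, 2)
X = triangulate([0, 0, 0], [0.5, 0, 2], [1, 0, 0], [-0.5, 0, 2])
```

Real feature matches are noisy, so the rays never intersect exactly; the midpoint (or a reprojection-error minimization) gives the best estimate, and bundle adjustment then refines points and cameras jointly.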
Like a waterfall, the input of the multi-view stereo phase is the output of the Structure from Motion phase. Since the camera parameters are now known, a pixel in one image defines a 3D optic ray that passes through that pixel and the camera center of the image. The corresponding pixel in another image can only lie on the projection of that optic ray into the second image (its epipolar line). By repeating this process for every pixel of every image, it is possible to generate depth maps, which lead to a dense point cloud.
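For a rectified stereo pair the epipolar line is simply the same image row, which reduces the search to a 1D disparity scan; depth then follows from the disparity. The following toy sketch illustrates the idea on synthetic rows (the function names, patch matching cost, and the focal/baseline values are illustrative assumptions):

```python
def best_disparity(left_row, right_row, x, patch=1, max_disp=8):
    """For pixel x in the left row, search along the same row of the
    right image (the epipolar line in a rectified pair) for the patch
    with the smallest sum of absolute differences (SAD)."""
    def sad(d):
        return sum(abs(left_row[x + k] - right_row[x - d + k])
                   for k in range(-patch, patch + 1))
    candidates = [d for d in range(max_disp + 1) if x - d - patch >= 0]
    return min(candidates, key=sad)

def depth(disparity, focal=700.0, baseline=0.1):
    """Convert a disparity in pixels to a depth in the same unit as
    the baseline, using depth = focal * baseline / disparity."""
    return focal * baseline / disparity

# synthetic rows: the feature centered at x=4 in the left image
# appears at x=1 in the right image, i.e. a disparity of 3 pixels
left  = [0, 0, 0, 10, 80, 40, 0, 0, 0, 0]
right = [10, 80, 40, 0, 0, 0, 0, 0, 0, 0]
d = best_disparity(left, right, 4)
z = depth(d)
```

Doing this for every pixel yields one depth map per image; fusing all the depth maps produces the dense point cloud described above.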
Once again, the output of the previous phase is fed to the reconstruction step. Various approaches can reconstruct a mesh: some produce smoother surfaces (like Poisson), while others better preserve hard-edged surfaces (like Sasha). The algorithm should be selected by the user according to the subject being reconstructed: for example, Poisson is more suitable for a human body, while Sasha is more suitable for an urban setting.
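One detail worth knowing: Poisson surface reconstruction requires oriented normals on the input points, which is why dense clouds are usually augmented with a normal per point before meshing. A minimal illustrative sketch, estimating a point's normal from two neighbors and orienting it toward the camera that observed it (the function name and sample coordinates are assumptions for illustration):

```python
def normal_from_neighbors(p, q, r, camera):
    """Estimate a surface normal at point p from two neighboring
    points q and r (cross product of the edge vectors), then flip
    it so it points toward the observing camera -- Poisson
    reconstruction needs consistently oriented normals."""
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    u, v = sub(q, p), sub(r, p)
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    view = sub(camera, p)
    if sum(a * b for a, b in zip(n, view)) < 0:
        n = [-a for a in n]                 # flip toward the camera
    length = sum(a * a for a in n) ** 0.5
    return [a / length for a in n]

# three points on the ground plane z = 0, camera overhead at z = 5:
# the oriented normal is (0, 0, 1), pointing up toward the camera
n = normal_from_neighbors([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 5])
```

Production software estimates normals from many neighbors (e.g. by plane fitting) rather than just two, but the orientation step toward the known cameras works the same way.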
At the end of the previous step, the colors from the photographs are simply bound to each vertex of every triangle. To obtain a photorealistic 3D model, each triangle must be painted with the correct pixel color information from the input photographs. Most photogrammetry software has its own proprietary texture generation and color balancing algorithms.
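The basic operation underlying texturing is projecting a mesh vertex back into a source photograph and sampling the color there. A minimal sketch of that step, assuming a simple pinhole camera with the point already expressed in camera coordinates (the function names, focal length, and the tiny sample image are illustrative):

```python
def project(point, focal, cx, cy):
    """Pinhole projection of a 3D point (camera coordinates) to pixel
    coordinates: u = focal * X / Z + cx, v = focal * Y / Z + cy."""
    x, y, z = point
    return focal * x / z + cx, focal * y / z + cy

def sample_color(image, u, v):
    """Nearest-neighbor lookup of the color at pixel (u, v); the image
    is a row-major list of rows of (r, g, b) tuples."""
    return image[int(round(v))][int(round(u))]

# a toy 3x3 image with the principal point at pixel (1, 1)
image = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
         [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
         [(0, 0, 0), (0, 0, 0), (0, 0, 0)]]
u, v = project((0.01, -0.01, 1.0), 100.0, 1.0, 1.0)
color = sample_color(image, u, v)
```

Real texturing additionally has to pick the best source photo per triangle (visibility, viewing angle) and blend seams, which is where the proprietary color-balancing algorithms come in.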
Once the photogrammetry pipeline has been completed, the 3D information can be used for all kinds of measurements and data post-processing. Below is a partial list of the many derived outputs photogrammetry can yield:
There are many photogrammetry software suites, each with its strengths and weaknesses. Below is a list of the best-known photogrammetry solutions that feature a full photogrammetry pipeline:
Below are some interesting pages worth reading: