The process is very similar to photogrammetry, but includes an additional step that calculates the camera positions from the photographs themselves. There is therefore no need to record camera positions at capture, and the technique can even be applied to pre-existing photographs, provided the coverage is sufficient.
Feature-matching software matches distinctive points across a sequence of images, and uses triangulation to determine the position and orientation of the camera for each image. This information is then used to calculate the positions of the matched features in 3D space. The resulting pointcloud can be processed to create a 3D model or elevation map. This technique allows the recording of surface detail at a precision, cost and speed that compares favourably with topographic survey, LiDAR, laser scanning and photogrammetry.
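The triangulation step can be illustrated with a minimal sketch (numpy only; the camera matrices and point below are illustrative, not taken from any real project): given two camera projection matrices and the pixel coordinates of the same feature in both images, the 3D position falls out of a small linear system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two views by the linear (DLT) method.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same feature in each image.
    Returns the 3D point as a length-3 array.
    """
    # Each view contributes two linear constraints on the homogeneous point X:
    #   u * (P[2] . X) - (P[0] . X) = 0
    #   v * (P[2] . X) - (P[1] . X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenise

# Example: two cameras sharing intrinsics K, the second shifted one unit
# along x; both observe the world point (0.5, 0.2, 4).
K = np.array([[800., 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = np.hstack([K, np.zeros((3, 1))])
P2 = np.hstack([K, (K @ np.array([-1., 0, 0])).reshape(3, 1)])
print(triangulate(P1, P2, (420, 280), (220, 280)))  # ~ [0.5, 0.2, 4]
```

In the real pipeline the projection matrices are not known in advance; they are themselves recovered by bundle adjustment over many such point correspondences, which is what the software described below does.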
This research builds upon my past professional experience in 3D modelling and on a successful pilot study I developed for my Master's dissertation. Only open source software will be used, and this will be developed to bring the technique within the reach of the average archaeologist and to integrate technologies specific to archaeological needs, such as the ability to georeference 3D objects.
The research focuses on a particular area of archaeological study, looking at a sequence of contemporary prehistoric monuments in the UK. It considers the applications of SfM as a prospection tool to further our archaeological understanding: the investigation of earthworks through images captured from a kite, pole or UAV; the investigation of structure through capture of the interior and exterior of a feature; and the virtual recreation of destroyed archaeology from heritage images.
The results of this research will be used to demonstrate the potential of SfM as a cheap and fast tool for recording information, and for disseminating it in the form of 3D models that are easily understood by the public.
The core software I am using is Bundler. It takes a collection of photographs and uses distinctive points in the images to calculate the position of each camera in space. The output is a sparse pointcloud of the matched points, together with the camera positions. The software is open source, and I run it using a variety of borrowed and adapted scripts.
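Bundler writes its sparse cloud and camera parameters to a `bundle.out` file; the layout below follows the v0.3 format documented with the software, but the reader is a sketch for illustration, not one of the scripts mentioned above. After the header come one block per camera (focal length and radial distortion, a 3x3 rotation and a translation) and one block per point (position, colour and a view list); the camera centre in world coordinates is recovered as -Rᵀt.

```python
import numpy as np

def read_bundle(path):
    """Parse a Bundler bundle.out file (assumed format v0.3).

    Returns (cameras, points): cameras is a list of dicts holding focal
    length, radial distortion, rotation R, translation t and the derived
    camera centre; points is an (N, 3) array of the sparse cloud.
    """
    with open(path) as fh:
        tokens = [t for line in fh if not line.startswith('#')
                  for t in line.split()]
    it = iter(tokens)
    num_cams, num_pts = int(next(it)), int(next(it))

    cameras = []
    for _ in range(num_cams):
        f, k1, k2 = [float(next(it)) for _ in range(3)]
        R = np.array([float(next(it)) for _ in range(9)]).reshape(3, 3)
        t = np.array([float(next(it)) for _ in range(3)])
        cameras.append({'f': f, 'k': (k1, k2), 'R': R, 't': t,
                        # Bundler convention x_cam = R X + t,
                        # so the camera centre is -R^T t
                        'centre': -R.T @ t})

    points = np.empty((num_pts, 3))
    for i in range(num_pts):
        points[i] = [float(next(it)) for _ in range(3)]
        _rgb = [next(it) for _ in range(3)]     # colour, unused here
        view_len = int(next(it))
        for _ in range(4 * view_len):           # skip the view list
            next(it)
    return cameras, points
```

A reader like this is enough to pull the camera positions and sparse cloud into other tools for inspection before the dense reconstruction stage.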
Bundler is also the principle behind Photosynth, published by Microsoft. Photosynth is very fast, and is usually my first step when processing an image set, as it gives a good idea of the quality of the results.
However, to create a dense pointcloud Bundler itself is needed: the files it produces allow me to run CMVS and PMVS2, which use the Bundler camera positions to find the positions of many more points.
(PMVS2 can be run on the output of Photosynth, but since the CMVS optimisation cannot then be used, it is impractically slow for large datasets.)
The results are always correctly aligned in relative space, but they need to be georeferenced to worldspace, for which I need markers visible in the photographs. I have used large coloured circles in some of my projects, surveying their positions with a total station.
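Given at least three such surveyed markers, the georeferencing step amounts to fitting a seven-parameter similarity transform (scale, rotation, translation) that maps their model-space coordinates onto the total-station coordinates, and then applying it to the whole cloud. A least-squares solution (Umeyama's method) can be sketched with numpy; the function name and the data below are illustrative only.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform mapping src -> dst
    (Umeyama's method).

    src, dst : (N, 3) arrays of the same markers, in model space
               and in surveyed world coordinates respectively.
    Returns scale s, rotation R, translation t with dst ~ s * R @ src + t.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / n                       # cross-covariance dst vs src
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection sneaking into the fitted rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def georeference(points, s, R, t):
    """Apply the fitted transform to an (N, 3) pointcloud."""
    return s * points @ R.T + t
```

With noisy real-world markers the fit is over-determined, and the residuals at the markers give a useful check on survey and reconstruction quality.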