The previous post on openMVG used a simple example that was only really 2.5D rather than full 3D. Although it was possible to see the mask in 3D there was only a limited amount of rotation and manipulation available. A complete 3D view allows an object to be rotated a full 360° in all dimensions.
Creating a full 3D model brings some new problems and, as usual, plenty of advice ( of varying quality 🙂 ) is available on the internet. There are few hard and fast rules, so the comments below are drawn partly from the online article on tested.com – The Art of Photogrammetry: How To Take Your Photos – and partly from my own experience.
The following techniques generally work well for an object like a stone which is mainly convex with a few surface features. Different techniques are needed for objects with large hollows or occlusions; these will be explored in future posts. So far I’ve only attempted models where the object is stationary and the photographer moves around it. I’ve not yet tried putting an object on a turntable with a stationary camera.
The lighting needs to be flat and even without deep shadows across the object. A cloudy sky outdoors is ideal. Using a camera flash is a bad idea as it casts too many shadows.
You can use any camera for photogrammetry; you definitely don’t need a top-of-the-range DSLR. More pixels are obviously better, but anything over 5 megapixels is fine – my 7 megapixel Canon IXUS 70 point-and-shoot gives reasonable results. IMHO better coverage of the object is more important than the sheer number of megapixels. I’ve read a few articles ( 1, 2 ) that advocate keeping the same focal length for each photo but I’m not sure why, because openMVG will deal with different focal lengths, and indeed different cameras, if necessary. Extreme wide angles or fisheye lenses are to be avoided though, as they create too much distortion.
Number of images
Unfortunately the question “How many images are needed?” can only be answered with “It depends”. However my experience, and the advice from the tested.com article above, is that photos every 10° – 15° are needed, i.e. 24 – 36 images per rotation. In addition, I’ve found it necessary to make multiple passes round the object at 1m height intervals to get sufficient coverage with extra images of the top if necessary, especially if it’s flat:
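The arithmetic for planning a shoot is simple enough to script. A minimal sketch – the spacing, number of passes and extra top shots here are just illustrative values within the ranges above:

```shell
#!/bin/sh
# Rough shot-count estimate for a full walk-around capture.
# Assumptions (illustrative, not prescriptive): even angular spacing,
# one photo per position, extra overhead shots counted separately.
STEP=12          # degrees between photos (10 - 15 works well)
PASSES=3         # circles around the object at different heights
TOP_SHOTS=4      # extra shots of a flat top

PER_PASS=$((360 / STEP))
TOTAL=$((PER_PASS * PASSES + TOP_SHOTS))
echo "$PER_PASS photos per pass, $TOTAL photos in total"
```

With a 12° spacing and three passes that comes to 94 photos – worth knowing before you set off, as it’s a lot of shutter presses.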
Sometimes the top of the object is too high to get a clear image. In those cases some sort of extension pole is useful. I have a Leki walking pole whose top unscrews to leave a ¼” thread suitable for a camera. I often use it as a monopod with my 55-300mm lens at its longer end, but it could also be used to raise the camera height. Some form of remote shutter release, and some way of aiming the camera, would also be needed.
There are examples of a more professional looking pole being used as part of the Archaeology Community Co-Production of Research Data project. ( Links 1, 2 – scroll down to see images of the pole in use ). Interestingly that site uses a PDF file to store a 3D image that can be rotated and zoomed. I had no idea that PDF had that capability, but I suspect that it’s an Adobe Acrobat only feature.
One problem in doing 360° captures is that some of the unwanted background can be resolved and can appear as 3D objects in the final scene. There are two ways of dealing with this:
- Edit the 3D model after it’s been created to remove these unwanted parts. Meshlab has the features needed for editing models.
- Add a mask, either globally or to each image, which tells openMVG which parts of the photo to ignore. See the openMVG documentation for details of how this works.
Each method has its advantages and disadvantages. The masks are extra work to set up but can work very well where a lot of the background is captured. For the example below only a small amount of background was present so I tidied up the 3D model in Meshlab.
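If you do go the mask route, it helps to check that every photo has a matching mask before running the feature extraction. The per-image naming convention shown here ( `<image>_mask.png` next to each photo, with black marking regions to ignore ) is how I read the openMVG documentation – check the docs for your version before relying on it. The directory and file names are purely illustrative:

```shell
#!/bin/sh
# Illustrative layout: each photo gets a matching mask image next to it.
# Assumption (verify against the openMVG docs for your version):
# openMVG_main_ComputeFeatures looks for <image_name>_mask.png beside
# each image and ignores features in the black regions of the mask.
mkdir -p demo_images
touch demo_images/IMG_0001.JPG demo_images/IMG_0001_mask.png
touch demo_images/IMG_0002.JPG   # no mask yet

# Check every photo has a mask before running ComputeFeatures.
missing=0
for img in demo_images/*.JPG; do
    mask="${img%.JPG}_mask.png"
    [ -f "$mask" ] || { echo "no mask for $img"; missing=$((missing + 1)); }
done
echo "$missing photo(s) without a mask"
```

A check like this is cheap insurance: a missing or misnamed mask is silently ignored rather than reported as an error.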
Thin objects and crossing objects
Plants, especially grass or trees, can be troublesome because the 3D reconstruction algorithm has problems tracking all of the different objects. An example is the tuft of reeds growing against this stone: the 3D model has a hole in it where no reconstruction was possible. ( I’ve turned off the texture on the 3D model to make the hole easier to see )
The command line parameters of the various openMVG and MVE tools allow quite a wide range of different options. ( Some of these options are controllable from the scripts that I described in a previous post ). I generally leave things at the default settings except for:
The openMVG_main_ComputeFeatures tool has a [-p|--describerPreset] option which controls the level of detail generated in the sparse reconstruction. The default is NORMAL, which is usually fine when the image coverage is sufficient; occasionally I have used HIGH when the coverage is a bit lacking. There is also ULTRA – which comes with the warning !!Can be time consuming!! – but I’ve never found that level to be necessary.
The dmrecon tool has a -f, --filter-width=ARG option which controls the level of detail generated. It defaults to 5 but I usually use 7. It’s possible to go up to 11 if the image coverage is not very good, but it does take somewhat longer. Again, if the image coverage is sufficient then 7 is fine.
The meshclean tool has a -c, --component-size=ARG option which specifies the minimum number of vertices for any single object in the reconstructed scene. If an object has fewer than this number of vertices then it’s discarded. The default is 1000, which is usually fine, but I have increased it to 20000 on occasion to remove unwanted bits of background.
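Putting those three options together, here is a sketch of just those pipeline steps with the non-default values I tend to use. The directory and file names are illustrative and the surrounding steps ( matching, SfM, scene conversion etc. ) are omitted – see the scripts from the earlier post for the full sequence:

```shell
#!/bin/sh
# Sketch only: paths and file names are illustrative, and the other
# pipeline steps between these commands are omitted.
MATCHES_DIR=matches
MVE_SCENE=mve_scene

# Sparse features: NORMAL is the default; HIGH helps when coverage is thin.
openMVG_main_ComputeFeatures -i "$MATCHES_DIR/sfm_data.json" \
    -o "$MATCHES_DIR" -p HIGH

# Dense depth maps: filter width 7 rather than the default 5.
dmrecon --filter-width=7 "$MVE_SCENE"

# Mesh cleaning: drop components with fewer than 20000 vertices.
meshclean -c20000 "$MVE_SCENE/surface.ply" "$MVE_SCENE/surface-clean.ply"
```

Raising --filter-width and --component-size both trade processing time for a cleaner result, so it’s worth trying the defaults first and only increasing them if the output warrants it.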
The test subject for this post was one of the standing stones erected for the 1962 Welsh National Eisteddfod in Parc Howard in Llanelli. More details of the site itself can be found here – http://www.megalithic.co.uk/article.php?sid=36840
The stone used is the leftmost in the picture, highlighted by the red box.
I used my Canon IXUS 70 point-and-shoot camera and took 48 images in two circles around the stone. The reconstruction settings were left at the defaults ( see above ) and I had to do a small amount of editing of the dense reconstruction in Meshlab to remove some of the unwanted background.
The left image below shows the camera positions and the sparse reconstruction. The right image is the geometric_matches.svg file generated by openMVG which shows the matches between the various images, the more connections the better. ( Click on either image to expand )
The reconstructed image can be seen on the photogrammetry page of my demo site.