Full 3D Models With openMVG/MVE

The previous post on openMVG used a simple example that was only really 2.5D rather than full 3D. Although it was possible to see the mask in 3D there was only a limited amount of rotation and manipulation available. A complete 3D view allows an object to be rotated a full 360° in all dimensions.

Creating a full 3D model brings some new problems and, as usual, plenty of advice ( of varying quality 🙂 ) is available on the internet. It’s very hard to lay down hard and fast rules, so the comments below come partly from the tested.com article The Art of Photogrammetry: How To Take Your Photos and partly from my own experience.

The following techniques generally work well for an object like a stone which is mainly convex with a few surface features. Different techniques are needed for objects with large hollows or occlusions; these will be explored in future posts. So far I’ve only attempted models where the object is stationary and the photographer moves around it. I’ve not yet tried an object on a turntable with a stationary camera.

Image Considerations

Light

The lighting needs to be flat and even without deep shadows across the object. A cloudy sky outdoors is ideal. Using a camera flash is a bad idea as it casts too many shadows.

Camera

You can use any camera for photogrammetry; you definitely don’t need a top-of-the-range dSLR. More pixels are obviously better but anything over 5 megapixels is fine: my 7 megapixel Canon IXUS 70 point-and-shoot gives reasonable results. IMHO good coverage of the object matters more than the sheer number of megapixels. I’ve read a few articles ( 1, 2 ) that advocate keeping the same focal length for each photo but I’m not sure why, because openMVG will deal with different focal lengths, and indeed different cameras, if necessary. Extreme wide-angle or fish-eye lenses are to be avoided though, as they create too much distortion.

Number of images

Unfortunately the question “How many images are needed?” can only be answered with “It depends”. However my experience, and the advice from the tested.com article above, is that photos every 10° – 15° are needed, i.e. 24 – 36 images per rotation. In addition, I’ve found it necessary to make multiple passes round the object at 1m height intervals to get sufficient coverage with extra images of the top if necessary, especially if it’s flat:

 

Sometimes the top of the object is too high to get a clear image. In those cases some sort of extension pole is useful: I have a Leki walking pole whose top unscrews to leave a ¼″ thread suitable for a camera. I often use it as a monopod when I’m using my 55-300mm lens at its longer end, but it could also be used to raise the camera height. Some form of remote shutter release, and some way of aiming the camera, would also be needed.

There are examples of a more professional-looking pole being used as part of the Archaeology Community Co-Production of Research Data project. ( Links 1, 2 – scroll down to see images of the pole in use ). Interestingly, that site uses a PDF file to store a 3D image that can be rotated and zoomed. I had no idea that PDF had that capability, but I suspect that it’s an Adobe Acrobat-only feature.

Masking

One problem in doing 360° captures is that some of the unwanted background can be resolved and can appear as 3D objects in the final scene. There are two ways of dealing with this:

  1. Edit the 3D model after it’s been created to remove the unwanted parts. Meshlab has features for editing models.
  2. Add a mask, either globally or to each image, which tells openMVG which parts of the photo to ignore. See the openMVG documentation for details of how this works.

Each method has its advantages and disadvantages. The masks are extra work to set up but can work very well where a lot of the background is captured. For the example below only a small amount of background was present so I tidied up the 3D model in Meshlab.
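If you do go the mask route, a mask is just a black-and-white image the same size as the photo where, as I understand the openMVG documentation, black marks the areas to ignore. You’d normally paint one in an image editor, but as a sketch of the idea, here’s a stdlib-only Python snippet that writes a rectangular “keep” mask as a greyscale PNG. The file-naming convention ( image name plus _mask.png ) and the black-means-ignore polarity are my assumptions, so check the openMVG docs for your version:

```python
import struct, zlib

def write_mask_png(path, width, height, keep_box):
    """Write an 8-bit greyscale PNG mask: white = keep, black = ignore.

    keep_box is (left, top, right, bottom) in pixel coordinates.
    """
    left, top, right, bottom = keep_box
    rows = []
    for y in range(height):
        row = bytearray(width)          # all black by default
        if top <= y < bottom:
            for x in range(left, right):
                row[x] = 255            # white inside the keep rectangle
        rows.append(b"\x00" + bytes(row))  # each PNG row starts with filter byte 0

    def chunk(tag, data):
        out = struct.pack(">I", len(data)) + tag + data
        return out + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF)

    # IHDR: width, height, bit depth 8, colour type 0 (greyscale)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"".join(rows))
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", idat))
        f.write(chunk(b"IEND", b""))

# One mask per photo, named to match the image (naming is an assumption):
write_mask_png("IMG_0001_mask.png", 640, 480, (100, 50, 540, 430))
```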

Thin objects and crossing objects

Plants, especially grass or trees, can be troublesome because the 3D reconstruction algorithm has problems tracking all of the different objects. An example is the tuft of reeds growing against this stone: the 3D model has a hole in it where no reconstruction was possible. ( I’ve turned off the texture on the 3D model to make the hole easier to see. )

Software settings

The command line parameters of the various openMVG and MVE tools allow quite a wide range of options. ( Some of these options are controllable from the scripts that I described in a previous post. ) I generally leave things at the default settings except for:

openMVG

The openMVG_main_ComputeFeatures tool has a [-p|--describerPreset] option which controls the level of detail generated in the sparse reconstruction. The default is NORMAL, which is usually fine when the image coverage is sufficient, and occasionally I have used HIGH when the coverage is a bit lacking. There is also ULTRA – which comes with a warning !!Can be time consuming!! – but I’ve never found this level to be necessary.
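As an illustrative sketch ( the file and directory names here are my own choices, not anything fixed by openMVG ), selecting a non-default preset from a wrapper script might look like this:

```python
import shlex

# Preset levels accepted by the -p option.
PRESETS = ("NORMAL", "HIGH", "ULTRA")

def compute_features_cmd(sfm_data, out_dir, preset="NORMAL"):
    """Build the argument list for openMVG_main_ComputeFeatures.

    sfm_data / out_dir paths are illustrative; only -p differs from
    the defaults.
    """
    if preset not in PRESETS:
        raise ValueError(f"unknown preset: {preset}")
    return ["openMVG_main_ComputeFeatures",
            "-i", sfm_data,   # sfm_data.json from the image listing step
            "-o", out_dir,    # directory for the .feat / .desc files
            "-p", preset]     # describer preset

cmd = compute_features_cmd("matches/sfm_data.json", "matches", preset="HIGH")
print(shlex.join(cmd))
```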

MVE

The dmrecon tool has a -f, --filter-width=ARG option which controls the level of detail generated. It defaults to 5 but I usually use 7. It’s possible to go up to 11 if the image coverage is not very good, but it does take somewhat longer. Again, if the image coverage is sufficient then 7 is fine.

The meshclean tool has a -c, --component-size=ARG option which specifies the minimum number of vertices for any single object in the reconstructed scene. If an object has fewer than this number of vertices then it’s discarded. The default is 1000, which is usually fine, but I have increased it to 20000 on occasion to remove unwanted bits of background.
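Pulling the MVE steps together, the dense-reconstruction stage looks roughly like the sketch below. The scale flags ( -s2 / -F2 ) and the intermediate file names are from my own setup rather than anything mandated by MVE, so treat them as placeholders:

```python
def mve_commands(scene_dir, filter_width=7, component_size=20000):
    """Argument lists for the MVE dense-reconstruction steps, with the
    two options I change from their defaults. Scale flags and file
    names are illustrative placeholders."""
    pset = f"{scene_dir}/pset-L2.ply"
    surface = f"{scene_dir}/surface-L2.ply"
    clean = f"{scene_dir}/clean-L2.ply"
    return [
        # Depth maps; --filter-width up from the default of 5.
        ["dmrecon", "-s2", f"--filter-width={filter_width}", scene_dir],
        # Merge the depth maps into a point set.
        ["scene2pset", "-F2", scene_dir, pset],
        # Surface reconstruction from the point set.
        ["fssrecon", pset, surface],
        # Drop small disconnected components, e.g. bits of background.
        ["meshclean", f"--component-size={component_size}", surface, clean],
    ]

for cmd in mve_commands("scene"):
    print(" ".join(cmd))
```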

Results

The test subject for this post was one of the standing stones erected for the 1962 Welsh National Eisteddfod in Parc Howard in Llanelli. More details of the site itself can be found here – http://www.megalithic.co.uk/article.php?sid=36840

The stone used is the leftmost in the picture, highlighted by the red box.

I used my Canon IXUS 70 point-and-shoot camera and took 48 images in two circles around the stone. The reconstruction settings were left at the defaults ( see above ) and I had to do a small amount of editing of the dense reconstruction in Meshlab to remove some of the unwanted background.

The left image below shows the camera positions and the sparse reconstruction. The right image is the geometric_matches.svg file generated by openMVG, which shows the matches between the various images – the more connections the better.

The reconstructed image can be seen on the photogrammetry page of my demo site.

The viewer is the 3D Heritage Online Presenter ( 3DHOP ) – the installation of which was described in a previous post. I’ve included both the uncompressed and compressed versions for comparison.

 

Posted in Photogrammetry | Leave a comment

Google App Engine for General Web Hosting

I use a free wordpress.com plan for this blog and IMHO it’s a great service. Unfortunately neither this plan nor any of the paid-for versions will allow me to run random bits of Javascript to demonstrate things like a 3D viewer in the browser window to view photogrammetry models. There’s inevitably a suitable WordPress plug-in but part of this exercise is for me to learn about web technologies and I’m worried that a plug-in would obscure some of the details. I’d also like to be able to switch between software packages if necessary which could be difficult with a plug-in.

I could self-host a WordPress installation and do it that way but, for now at least, I’m trying to avoid doing any IT sysadmin and support and concentrate on the writing 🙂 Fortunately there are plenty of other options around and one that I’ve been wanting to try for a while is Google’s App Engine. Running a separate site is less desirable because I have to link out from this blog to a different URL but it’s the path of least resistance to getting something up and running quickly.

Google App Engine Quotas and Billing

App Engine has a free plan / trial but, as is usual with Google products, it’s quite hard to pin down what the free limits are and what happens if you exceed them. There’s an FAQ discussing “Free Trial” and “Always Free” plans but these seem to assume that you have an account with Google, which I do not, although I do have a Gmail account, which is needed to get started. Anyway, after setting up a demo site I had the following information on my App Engine console page:

I deduce from this that I get 1 Gbyte of bandwidth per day and, if I dig into the “Settings” link, 5 Gbyte of cloud storage for free, after which some sort of error page will be served. We shall see 🙂

Google App Engine Setup

Setting up a static web page was ridiculously easy, just a matter of following the instructions on Google’s Hosting a static website on Google App Engine page. The URL refers to Python but there’s no Python involved, just HTML / CSS / Javascript. As far as I can see you can create as many applications / sites as you like and they will have the URL format <app name>.appspot.com. I suspect that the free bandwidth and storage allocations are aggregated across all your sites.
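For reference, the whole configuration for a static site boils down to a short app.yaml. This is a minimal sketch based on Google’s instructions at the time of writing – the www directory name is my own choice and the runtime line may well differ for newer SDKs:

```yaml
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
```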

My demo site is at http://thereteng.appspot.com/ and was up and running quickly with no bother at all. Essentially you develop locally and deploy the site using Google’s SDK, which you download and install. I only use three of the SDK commands:

  • gcloud app deploy [ -v version no] – Deploys the site. The version number is optional and will be automatically added by the SDK if it’s not specified.
  • gcloud app browse – Opens the site in the browser. Or you can just point your browser at the site URL manually.
  • gcloud app logs tail -s default – Live view of the web logs.

Rather than develop the whole site layout from scratch I used one of the many Creative Commons licensed site templates available – https://www.html5webtemplates.co.uk/ in this case. It’s a simple layout and I just tweaked some of the colours and formatting.

3D Viewer

The first use for my new demo page was to display some of the PLY models created using photogrammetry in a web based 3D viewer.

The go-to technology for displaying 3D objects in the browser is undoubtedly WebGL ( Wikipedia link ), which is normally not used directly but via a higher-level library. I’ve used threejs in the past but this time I was looking for something with a higher level of abstraction, just to display PLY models. After a bit of Googling I came across the 3D Heritage Online Presenter, or 3DHOP, which is “… an open-source software package for the creation of interactive Web presentations of high-resolution 3D models, oriented to the Cultural Heritage field.”

It has quite a high level of abstraction, so loading a PLY model becomes just half a dozen lines of HTML & Javascript. The documentation is very good and I was able to get it up and running very quickly. It also has a feature whereby PLY models can be pre-processed so that they can be progressively downloaded and rendered; this saves having a download progress bar showing while the whole model downloads.
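For flavour, a minimal scene declaration looks something like the sketch below, adapted from my reading of the 3DHOP how-to pages – the mesh name, model path and element ids are illustrative, so check the 3DHOP documentation for the exact setup your version expects:

```html
<div id="3dhop" class="tdhop" onmousedown="if (event.preventDefault) event.preventDefault()">
  <canvas id="draw-canvas"></canvas>
</div>

<script type="text/javascript">
var presenter = null;

function setup3dhop() {
  presenter = new Presenter("draw-canvas");
  presenter.setScene({
    meshes: { "Stone": { url: "models/stone.ply" } },   // illustrative path
    modelInstances: { "Model1": { mesh: "Stone" } }
  });
}

$(document).ready(function () { init3dhop(); setup3dhop(); });
</script>
```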

There’s also a compression option which I haven’t investigated yet and many other display features. The pre-processor as downloaded only runs under Windows. It should be possible to compile it for Linux but I haven’t tried this yet.

Examples of my photogrammetry models can be seen on the demo site here – http://thereteng.appspot.com/photogrammetry.html

Posted in Photogrammetry, Programming | 3 Comments

Flowers in Groups

As well as being photogenic as single specimens, flowers also work well in groups. For these shots I’ve tried to isolate the flowers from the background, either by reducing the depth of field or by using a backing board.

This is a Paperwhite narcissus grown indoors by my wife. It flowered over one Christmas holiday. I’ve used a black background card to make the white flowers show up more clearly.

This was taken in the glasshouse at the National Botanic Garden of Wales. My notes are a bit lacking so I’m afraid that I don’t know the genus. I love the contrast between the purple of the petals and the bright yellow ends of the stamens.

This is a convolvulus, regarded pretty much everywhere as a weed. It does have attractive pure white flowers but I still wouldn’t want it in my garden. I was experimenting with my new 55-300mm telephoto lens at the time and it proved very successful in reducing the depth of field to blur the background.

These are part of a bouquet that my wife got on Mother’s Day. I used a black background card again and also tried to zoom in so that the flowers filled the whole frame. I used artificial lighting from each side to try and add some shadow details.

This is blossom on a plum tree in the garden of our old house. The background really was a cloudless blue sky but it was difficult to frame a suitable group of blossoms against the sky. I ended up standing on a step ladder, holding some other branches out of the way with one hand while taking the photo with the other.

Posted in Flowers, Photography | Leave a comment