For a while now I’ve been processing the digital images from my camera using Free and Open Source Software ( FOSS ), mainly the GNU Image Manipulation Program ( GIMP ). It’s easy to use and there’s a wealth of online documentation and tutorials.
Having started to create some time lapse videos I was again interested in a FOSS toolchain. This wasn’t quite as simple, but I think that I’ve come up with some workable options [1]. Note that the description below doesn’t cover the mechanics of capturing the photos in the first place; there are many tutorials and explanations available online [2]. My starting point is a camera full of images that need to be converted to a time lapse video. This is a sample video created using the toolchain described below; it was taken in our garden and shows a sunset over Carmarthen Bay in South Wales.
( 490 images taken at 2s intervals. There’s a window of about two months from mid-December to mid-February when the sunset is visible from this spot in the garden. )
[1] For an alternative view with some different options see:
[2] The best tutorial / explanation that I found online is “Getting Started with Timelapse Photography” by Richard Harrington. It’s a long video but worth watching all the way through.
Note that my solution should apply to any Linux distribution. Some of the programs may have Windows or Mac versions.
Once the images have been captured then the process is roughly:
- Get the images off the camera and onto the computer. For time lapse it’s advisable to shoot in RAW format rather than JPEG, and RAW is what this process assumes.
- Review the images to make sure that there are no problems. For example, camera movement or unwanted birds flying across the frame can cause odd flickers in the final video.
- Process the images, for example to correct exposure or sharpen them. Essentially this is the same as you would do for a single still image.
- Crop and rescale the images. A DSLR can capture at a much higher resolution than even HD video, so the images will need to be reduced in size at least. The aspect ratio of a DSLR image will also be different from that of a video, for example:
- My DSLR ( Nikon D3100 ) takes images of 4608 x 3072 pixels ( 3:2 aspect ratio )
- An HD video is 1920 x 1080 pixels ( 16:9 aspect ratio )
- Deflicker the images. The general advice when shooting images for time lapse is to use manual camera settings throughout. However, it’s still possible to end up with some flicker in the final video because, even in fully manual mode, the camera may not expose exactly the same from one image to the next.
- Convert the images to a video. The parameters for the video ( size, aspect ratio, frame rate etc. ) will depend on the final market for the video. I usually aim for HD ( 1920 x 1080 ) and 25 frames per second.
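The crop and rescale numbers above are easy to sanity-check with a little arithmetic: keeping the full 4608 pixel width, a 16:9 crop of a D3100 frame is 4608 x 2592, which then scales down cleanly to 1920 x 1080. A small shell helper illustrates the sums ( the geometry string follows the WxH+X+Y convention that ImageMagick uses; the D3100 numbers are just my case ):

```shell
#!/bin/sh
# Given a source width and height, compute the height of a 16:9 crop
# that keeps the full width, plus the vertical offset that centres it.
# Output uses ImageMagick's WxH+X+Y geometry convention.
crop_16x9() {
    width=$1
    height=$2
    crop_h=$(( width * 9 / 16 ))          # height of a full-width 16:9 crop
    offset=$(( (height - crop_h) / 2 ))   # centre the crop vertically
    echo "${width}x${crop_h}+0+${offset}"
}

# D3100 frame: 4608 x 3072 ( 3:2 ) -> 4608x2592+0+240
crop_16x9 4608 3072
```

The resulting 4608 x 2592 crop is exactly 16:9, so the final resize to 1920 x 1080 involves no distortion.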
Looking at each of these in turn:
Get the images off the camera
I’ve never bothered with any camera maker’s software to transfer images; I just use a USB card reader, available cheaply from Amazon. When plugged into a PC it looks like a USB drive and allows access to all the photos. More modern PCs may have dedicated card readers.
Review the images
I use gnome-raw-thumbnailer as a plugin to add RAW thumbnails to the standard file viewer ( Caja in my case ). I found that applications like Shotwell work fine but seem to want to add in extra features, like photo management, that I just don’t need and that mainly get in the way.
Process the images
Processing RAW images on Linux has been a bit hit-or-miss in the past. However things have improved over the past few years and there are now a number of potential options. I eventually settled on darktable, but this was mainly based on reviews I read online rather than carrying out any detailed analysis of my own. There are others out there and it may be worth experimenting a bit. The key feature needed for processing timelapse is the ability to batch edit images.
Darktable takes a bit of getting used to. Its UI is not inherently intuitive, but it’s not too bad once you’ve got used to it. The only aspect that still really annoys me is the lack of an “undo” command. Yes, you can step back through the History stack and continue editing, but it’s not quite the same thing. ( And I’m not the only one complaining – https://redmine.darktable.org/issues/8498 ! )
One limitation of darktable is that it is not fully scriptable. There are some scripting commands but, as far as I can see, none of the image manipulation features are controllable. The main requirement for this feature is for processing timelapses where the light level changes considerably, e.g. a sunset. For these conditions I shoot with a fixed aperture and shutter speed and use the wide range of exposure compensation available with RAW images to correct the exposure later. As each image may need a different level of exposure compensation, it’s just too time consuming to do it manually, and a script is the easiest way.
An alternative for processing RAW images that is fully scriptable is UFRaw. It’s not a complete replacement for darktable, unfortunately, as it lacks any image sharpening features, so for the moment at least I occasionally need to use both tools.
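As a sketch of the scripted exposure-ramp idea, the snippet below feeds ufraw-batch a linearly increasing exposure compensation across a sunset sequence. The --exposure, --out-type and --compression options are standard ufraw-batch flags; the DSC_*.NEF file names and the 0 to +3 EV range are just examples for illustration:

```shell
#!/bin/sh
# Ramp exposure compensation across a sequence with ufraw-batch.
# File names and the EV range are examples - adjust to suit the shoot.

# Linearly interpolate an EV value for frame $1 of $2 total,
# from $3 EV at the start to $4 EV at the end.
ramp_ev() {
    awk -v i="$1" -v n="$2" -v a="$3" -v b="$4" \
        'BEGIN { t = (n > 1) ? i / (n - 1) : 0; printf "%.2f", a + (b - a) * t }'
}

files=$(ls DSC_*.NEF 2>/dev/null)
n=$(echo "$files" | wc -w)
i=0
for f in $files; do
    ev=$(ramp_ev "$i" "$n" 0 3)   # brighten later frames as the light fades
    ufraw-batch --exposure="$ev" --out-type=jpg --compression=90 \
                --overwrite --output="${f%.NEF}.jpg" "$f"
    i=$((i + 1))
done
```

For a real sunset the ramp probably shouldn’t be strictly linear, but once the per-frame EV is a function of the frame number any curve can be dropped into ramp_ev.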
Crop and rescale the image
There are two situations to consider here:
- All the images are to be cropped and rescaled the same amount. This can easily be done as part of the darktable / UFRaw edits.
- Different crops and resizes are used on different images to simulate zoom and pan in the final movie. In this case a fully scripted batch editor is needed, and I find that ImageMagick and a shell script do the job. At the moment I write a custom script for each movie, which is a bit time consuming. There’s definitely scope for some process improvement here.
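A minimal sketch of such a script, assuming D3100-sized source frames named DSC_*.jpg and a slow zoom in: each frame gets a slightly tighter centred 16:9 crop than the last, then everything is resized to HD with ImageMagick’s convert. The start and end crop widths are arbitrary example values:

```shell
#!/bin/sh
# Scripted zoom with ImageMagick: crop each frame a little tighter than
# the last, keep 16:9, centre the crop, then resize to HD.

# Crop geometry for frame $1 of $2: interpolate the crop width from
# $3 pixels down to $4, within a source frame of $5 x $6 pixels.
zoom_geometry() {
    awk -v i="$1" -v n="$2" -v w0="$3" -v w1="$4" -v sw="$5" -v sh="$6" 'BEGIN {
        t = (n > 1) ? i / (n - 1) : 0
        w = int(w0 + (w1 - w0) * t)    # crop width for this frame
        h = int(w * 9 / 16)            # keep 16:9
        printf "%dx%d+%d+%d", w, h, (sw - w) / 2, (sh - h) / 2
    }'
}

i=0
for f in DSC_*.jpg; do
    [ -e "$f" ] || break               # no matching files
    geom=$(zoom_geometry "$i" 490 4608 3456 4608 3072)
    convert "$f" -crop "$geom" +repage -resize '1920x1080!' "frame_$i.jpg"
    i=$((i + 1))
done
```

The +repage is needed so the cropped image forgets its original canvas, and the ! on the resize forces the exact HD size even when integer rounding leaves the crop a pixel off 16:9.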
Deflicker the images
The only FOSS deflickering utility that I know of is this Perl script that uses ImageMagick. It can be run with one or two passes through the images and works very well.
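Without reproducing that script, the basic idea behind luminance-based deflickering can be sketched in shell: measure each frame’s mean brightness ( ImageMagick’s -format '%[fx:mean]' does this ), smooth the series with a moving average, then scale each frame towards its smoothed value. The window size of 3 below is an arbitrary choice for illustration:

```shell
#!/bin/sh
# Sketch of the deflickering idea: smooth the per-frame mean luminance
# series, then multiply each frame by (smoothed / measured).

# Moving average over stdin (one number per line), window width $1.
smooth() {
    awk -v w="$1" '{ v[NR] = $1 }
        END {
            for (i = 1; i <= NR; i++) {
                s = 0; c = 0
                for (j = i - int(w / 2); j <= i + int(w / 2); j++)
                    if (j >= 1 && j <= NR) { s += v[j]; c++ }
                printf "%.6f\n", s / c
            }
        }'
}

# Usage sketch (requires ImageMagick and a set of frames):
#   for f in frame_*.jpg; do
#       convert "$f" -colorspace Gray -format '%[fx:mean]\n' info:
#   done > means.txt
#   smooth 3 < means.txt > smoothed.txt
#   # then per frame: convert "$f" -evaluate multiply "$ratio" "$f"
```

A proper deflicker script does more than this ( outlier rejection, multiple passes ), which is why I use the linked script rather than rolling my own.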
Convert the images to a video
The simplest and easiest option that I found was ffmpeg. The ffmpeg options can be somewhat overwhelming, but I found a good forum post that explains the basic usage. Based on that post I use variations on the following basic command, which seems to work fine:
ffmpeg -framerate 25 -start_number 2544 -i DSC_%d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4
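One quick sanity check before rendering: the clip length is just the frame count divided by the frame rate, so it’s worth confirming you have enough images for the duration you want. A trivial helper, using the numbers from the sunset clip above:

```shell
#!/bin/sh
# Clip duration in seconds = number of frames / frames per second.
duration() {
    awk -v n="$1" -v fps="$2" 'BEGIN { printf "%.1f", n / fps }'
}

# 490 images at 25 fps -> a 19.6 second clip
duration 490 25
```

Working backwards, a one-minute clip at 25 fps needs 1500 images, which ( at 2 s shooting intervals ) means 50 minutes at the tripod.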
One further point to note is that ffmpeg won’t easily allow you to add a sound track, add titles, fade between multiple video segments etc. To do that you need a more fully featured video editor. There are lots of Linux video editor reviews online, but most of these are a few years old and are now largely irrelevant because they refer to older versions of the tools. A lot are also rather superficial, but there’s a relatively recent Reddit thread which may help.
I’ve only run some brief tests, but kdenlive ( https://kdenlive.org/ ) seems to work OK and certainly does what I need.
One point to note is that the versions of these tools supplied with a particular Linux distribution are often out of date, and it’s best to install from the tool’s Personal Package Archive ( PPA ) where possible to get the latest version.