But why tho

As a very amateur photographer starting to play around with various styles, one that I'd always admired and wanted to figure out was the timelapse. I was inspired by the /r/timelapse community and decided to give it a shot on my own, since I figured it couldn't be that hard, right?


Starting Equipment

I had all of these at my disposal for the project:

Initial Raspberry Pi Setup

I started out with some really great tutorials on getting my Pi set up. It's really hard to overstate how much better the documentation for the pi has gotten over the last 6 years.

I threw a simple image onto a 4GB SDHC and fuckin threw that in my crumbling homemade lego case and prayed like fuck that it would actually boot. And somehow, it did.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
# wait 5 hours for my 6 year old pi to become secure-ish.

Image Capture

Thinking I'd need some fancy shit to do image capture and "processing", I followed this tutorial to set up OpenCV on my pi, and about 75% of the way through the installation (it's an old, sad machine w/ 4GB of space), I decided that it was too much and that I should just use something a bit jankier. But where would I find it?


Oh yes. Oh god yes.

Having abandoned OpenCV, I turned to the command line to install some random library. The goal was to just take an image from the available USB camera and save that to a file, and then post it to my remote server. I went with fswebcam.

Sending it from the pi to a remote server for storage

I simply wrote a cron task w/ the following script running every 2 minutes of every hour. This ended up producing a lot of photos, which let me choose the granularity (i.e. how many fps I wanted) down the line. I thought this would be a good start...


#!/bin/bash

DATE=$(date +"%Y-%m-%d_%H%M")
filename="/path/to/webcam/$DATE.png"
url="<remote server upload URL>"

fswebcam -r 1280x720 --no-banner "$filename"

auth="Authorization: Basic <Super Secret Base64 Encoding>"
content_disposition="content-disposition: inline; filename=\"$DATE.png\""

content="$(curl -X PUT --upload-file "$filename" "$url" -v -H "$auth" -H "$content_disposition" > /path/to/webcam/logs.txt )"
rm /path/to/webcam/*.png


After reading a lot on debugging what went wrong, I decided to update some things. This is my current setup:

# crontab
*/2 * * * * cd /path/to/webcam/dir; ./webcam.sh > ./webcam.log 2>&1

# bash script
#!/bin/bash

DATE=$(date +"%Y-%m-%d_%H%M")
filename="/path/to/webcam/$DATE.png"
url="<remote server upload URL>"

fswebcam -r 1280x720 --no-banner "$filename"

auth="Authorization: Basic <Super Secret Base64 Encoding>"
content_disposition="content-disposition: inline; filename=\"$DATE.png\""

content="$(curl -X PUT --upload-file "$filename" "$url" -v -H "$auth" -H "$content_disposition" )"
rm "$filename"
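The Basic auth value is just `user:password` base64-encoded, so generating the `<Super Secret Base64 Encoding>` placeholder is a one-liner (the credentials here are made up):

```shell
# base64-encode a hypothetical user:password pair for the Authorization header
creds="pi:hunter2"
encoded=$(printf '%s' "$creds" | base64)
auth="Authorization: Basic $encoded"
echo "$auth"
```

`printf '%s'` instead of `echo` keeps a trailing newline out of the encoded value, which would otherwise break the header comparison on the server side.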

Receiving files from pi on remote server

I'm a JS engineer by trade, so I naturally turned to NodeJS as my default quick-firing solution.

I had a Digital Ocean box that I was already paying for, so I decided to throw this small script onto it, and it seems to have handled things fairly well. I tested it out with Postman to make sure that I was getting the right permissions and sending the right data, and it took a few iterations to get it to work right, but unfortunately I didn't save any of those. Probably should've used git.

#!/usr/bin/env node
const http = require('http');
const fs = require('fs');
const path = require('path');

// janky dotenv replacement
fs.readFileSync('./.env', 'utf8')
  .split('\n')
  .filter(Boolean)
  .forEach((curr) => {
    const [key, val] = curr.split('=');
    Object.assign(process.env, { [key]: val });
  });

const basePath = path.resolve(__dirname, 'pics');
const errorPath = path.resolve(__dirname, 'errors.txt');
const reg = /filename="(.*)"/i;

// try to get the filename from the header
function getContentDispo(req) {
  const contentDisposition = req.headers['content-disposition'];
  const match = reg.exec(contentDisposition);
  return match[1];
}

// handler for the http request
function handlePut(req, res) {
  try {
    // get the filename and create a filepath
    const filename = getContentDispo(req);
    const filePath = path.resolve(basePath, `${filename}`);
    const writer = fs.createWriteStream(filePath);

    // make sure to say everything went great.
    writer.on('close', () => {
      res.statusCode = 201;
      res.end();
    });

    // write the content of the request to the file path
    req.pipe(writer);
  } catch (e) {
    // sure, we can handle errors.
    const errorStream = fs.createWriteStream(errorPath, { flags: 'a' });
    errorStream.end(`ERROR ENCOUNTERED IN MATCH: ${(new Date()).toLocaleDateString()}\n`);
    res.statusCode = 400;
    res.end();
  }
}

// janky auth.
function validateAuth(req) {
  const auth = req.headers.authorization;
  if (!auth) return false;
  const base64 = auth.split(' ')[1];
  const userAdmin = Buffer.from(base64, 'base64').toString();
  const [user, pass] = userAdmin.split(':');
  const matchPass = process.env.password === pass;
  const matchUser = process.env.user === user;
  return Boolean(matchPass && matchUser);
}

// create the server to handle requests.
const server = http.createServer((req, res) => {
  if (!validateAuth(req)) {
    res.statusCode = 401;
    res.statusMessage = 'You are not authorized';
    res.end();
    return;
  }

  console.log(req.method, req.url, (new Date()).toLocaleDateString());
  switch (req.method.toUpperCase()) {
    case 'PUT':
      handlePut(req, res);
      break;
    default:
      res.statusCode = 404;
      res.end();
  }
});

server.listen(8080, () => {
  console.log('server listening on port 8080');
});

I'll probably do a walkthrough another day, but this is how I like to write small web services. I try to only use the modules available in a fresh node install, and love playing around with streams, callbacks, etc. I could've done some clever async code too, but that's probably for the next revision. I'm using git now.

Essentially, this server only accepts PUT requests, and only when the auth header matches exactly.

It then extracts the filename from the content-disposition header and uses that as the filepath when writing.
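The same extraction is easy to sanity-check from the shell, with sed standing in for the regex (the header value here is a made-up example):

```shell
# pull the filename out of a content-disposition header, like getContentDispo does
cd_header='content-disposition: inline; filename="2018-03-30_2342.png"'
fname=$(printf '%s' "$cd_header" | sed -n 's/.*filename="\([^"]*\)".*/\1/p')
echo "$fname"
```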

That's it, pretty much. Nothing fancy; pretty much the definition of an MVP.

Saving the files for processing

I decided that my ever-growing pics/ directory was getting out of control, and opted to do some kind of periodic grouping & dumping. I still haven't automated the ftp part, but we're getting there. This is the shell script that checks whether we have 100-ish photos, and when we do, groups them into a folder named after the current timestamp.


#!/bin/bash

base=/path/to/webcam
max=100

count=$(ls "$base"/pics/ | wc -l)
DATE=$(date '+%Y_%m_%d_%H_%M_%S')

if [[ $count -gt $max ]]; then
  echo "time to clean up"
  mkdir -p "$base"/"$DATE"
  mv "$base"/pics/*.png "$base"/"$DATE"
else
  echo "we are still under, with $count"
fi

This was super easy to then ftp into my server periodically and download all of the photos. My initial strategy involved aggressively compressing them, but I encountered some really strange behavior, which I'll document later, probably. Some standout moments were "this png is not a png" when running ffmpeg -i, and somehow corrupting the tar.gz files. So it made more sense to just create big folders and download periodically for processing.
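If I ever revisit the compression route, verifying the archive before touching the originals would have caught that corruption much earlier. A minimal sketch (the paths and files here are made up):

```shell
# stand-in for a dated folder of pics
dir=$(mktemp -d)
touch "$dir/a.png" "$dir/b.png"

# compress the folder, then verify the archive lists cleanly
tar -czf /tmp/batch.tar.gz -C "$dir" .
if tar -tzf /tmp/batch.tar.gz > /dev/null; then
  verified=yes   # only now is it safe to delete the originals
else
  verified=no
fi
echo "$verified"
```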

Timelapse processing

So I call myself the ffmpeg butcher because I broke everything repeatedly, including removing entire folders' worth of photos instead of debugging. I'll leave you with a few notes:

  • I used this guide to use ffmpeg to batch all of the webms together.
  • I wrote a shitty script that looked for folders like this, looped through each one, found all the webms, and wrote them to one big webm w/ the folder name as the title.
  • I took those webms, and grouped them together using the above guide to make one BIG webm for every day's worth of pepper pics, and then eventually combined those. Essentially just reducing and reducing until I had larger units of time.
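The reduce step above looked roughly like this — written here as a dry run that prints the ffmpeg commands instead of executing them, since the real flags came from the guide (the folder layout, framerate, and codec settings are my guesses):

```shell
base="/path/to/webcam"

# one webm per dated folder of stills
for dir in "$base"/20*/; do
  [ -d "$dir" ] || continue            # skip when the glob matches nothing
  name=$(basename "$dir")
  echo ffmpeg -framerate 24 -pattern_type glob -i "$dir*.png" \
       -c:v libvpx-vp9 -b:v 1M "$base/$name.webm"
done

# then stitch the per-folder webms into one big one with the concat demuxer
list=/tmp/webm_list.txt
printf "file '%s'\n" "$base"/*.webm > "$list"
concat_cmd="ffmpeg -f concat -safe 0 -i $list -c copy $base/full.webm"
echo "$concat_cmd"
```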

Eventually, I was rewarded with something like this:

I'm still working on building a longer timelapse, might use some fancier software, but ffmpeg has done the job well, if not brutally.

Death of Pi

The date: Friday, 30th of March, year 2018. The time: 23:42:48 UTC. The log:

Mar 30 23:42:48 raspberrypi systemd[1]: Caught <SEGV>, dumped core as pid 29604.
Mar 30 23:42:48 raspberrypi systemd[1]: Freezing execution.

Sidenote: check out this post for info on how to read syslog.

Debugging shit:

If you're curious, this is my current progress on debugging the pi's death; I ended up getting it restarted a few days ago.


Thanks for reading!