Pepper Refactoring

In my last pepper-related post, I talked about how I got my timelapse up and running on my Raspberry Pi at work.

Old Methods

On Raspberry Pi:

  • Cron job to generate a png from the USB webcam
  • Send the png to a remote server on DigitalOcean (sketched below)
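
For reference, the Pi side boiled down to something like this. It's a sketch rather than my exact setup: fswebcam as the capture tool, the paths, and the ten-minute interval are all assumptions.

# crontab entry on the Pi: capture and ship a frame every ten minutes
*/10 * * * * /home/pi/snap_and_send.sh

# snap_and_send.sh
#!/bin/bash
name=$(date '+%Y-%m-%d_%H%M').png
# grab a single 1280x720 frame from the USB webcam as a png
fswebcam -r 1280x720 --no-banner --png 9 "/home/pi/pics/$name"
# push it to the remote server (key-based ssh auth assumed)
scp "/home/pi/pics/$name" user@remote:/path/to/pics/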

On Remote Server

  • Receive png and store in subdirectory
  • Cron job
    • Check if there are 100 photos
    • If there are, package them into a folder and put it in a separate directory (sketched below)
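
The server-side job was a little bash script along these lines. This is a reconstruction, not the original; the paths and the batch-folder naming are guesses.

#!/bin/bash
# runs from cron: once 100 photos pile up, package them into their own folder
pics=/path/to/pics
batches=/path/to/batches

count=$(ls "$pics" | wc -l)
if [[ $count -ge 100 ]]; then
  batch=$batches/$(date '+%Y-%m-%d_%H%M')
  mkdir -p "$batch"
  # the timestamp names sort chronologically, so this grabs the oldest 100
  ls "$pics"/*.png | head -n 100 | xargs -I{} mv {} "$batch"/
fi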

Local Machine (manually)

  • FTP into remote server
  • Copy 100-photo folders into local directory
  • Run ffmpeg script on folder to generate webms
  • Concat the webms together by day (see the sketch after this list).
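
The ffmpeg script amounted to roughly one command per folder. The folder name is hypothetical and the flags are a plausible sketch, not the exact script I ran.

# 100 sequentially named pngs in a folder -> one webm
ffmpeg -framerate 24 -pattern_type glob -i 'batch_042/*.png' -c:v libvpx batch_042.webm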

Bottlenecks

So the bottlenecks I ran into here were: 1) I kept forgetting to log into my remote server, and 2) my server had a shitload of images building up.

I noticed on my local machine that the webms were considerably smaller than the 100 pngs they were built from, especially at 24 fps, so I asked myself, "why not just get ffmpeg installed on my remote server and do the encoding there?"

This kicks the can further down the road, but I'm not ready to do the next shit yet.

Hot Newness

Now I have a directory structure like so:

├── pics
│   ├── 2018-05-16_1500.png
│   └── ***more pngs***
├── space_saver.sh
├── tmp
└── webms
    ├── 2018_05_11_19_42_01.webm
    └── ***more webms***

My space_saver.sh added a few key functions:

copy_pngs - move all the pngs into a temporary directory

copy_pngs() {
  pics_directory=$base/pics
  tmp=$base/tmp
  # check if the tmp directory exists
  if ! [[ -d $tmp ]]; then
    # if it doesn't, make it!
    mkdir -p "$tmp"
  else
    echo "tmp exists"
  fi
  # move all the pictures in the pics/ directory to the tmp one
  mv "$pics_directory"/*.png "$tmp"/
}

ffmpegify - combine the pngs into one .webm file

ffmpegify() {
  # timestamped output name, e.g. 2018_05_11_19_42_01.webm
  DATE=$(date '+%Y_%m_%d_%H_%M_%S')
  filename=$base/webms/$DATE.webm
  # feed the concat demuxer a generated file list via process substitution
  ffmpeg -f concat -safe 0 -r 24 \
    -i <(for f in "$base"/tmp/*.png; do echo "file '$f'"; done) \
    -r 24 -threads 4 "$filename"
}
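
The <( ... ) process substitution is what lets the concat demuxer eat an on-the-fly file list without one ever touching disk, and it only works under bash proper, not plain sh, so the shebang matters. The concat demuxer also sidesteps ffmpeg's default image-sequence input, which wants a strict numeric pattern like img%03d.png rather than timestamp names.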

cleanup - remove the tmp pngs

cleanup() {
  rm "$base"/tmp/*.png
}

So my script ended up looking as simple as this:

#!/bin/bash

base=<path-to-server>

# check how many pics exist
count=$(ls "$base"/pics/ | wc -l)
max=100

if [[ $count -gt $max ]]; then
  echo "time to clean up"
  copy_pngs
  ffmpegify
  cleanup
else
  echo "we are still under, with $count"
fi
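
The script itself gets kicked off by cron on the server; the entry is a one-liner along these lines (the interval and path here are placeholders, not my actual crontab):

# check for a full batch of pics every hour
0 * * * * /path/to/space_saver.sh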

Learnings and Next Steps

The biggest thing I learned was to test your bash scripts locally. I made a change to one of my ffmpeg-generating bash scripts and ended up scrapping 2k pngs' worth of timelapse footage. 83 seconds of pepper footage, gone forever.

I also discovered bash functions, which made my life easier by isolating functionality and making it possible to bail early or produce meaningful error logs. I haven't done either of those yet, nor have I changed my cron job to output to an error log, but they're on the list.
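
Both fixes are small; here's a sketch of what I have in mind, with hypothetical paths (compgen -G just tests whether a glob matches anything):

copy_pngs() {
  pics_directory=$base/pics
  tmp=$base/tmp
  # bail early with a meaningful error if there's nothing to move
  if ! compgen -G "$pics_directory/*.png" > /dev/null; then
    echo "copy_pngs: no pngs found in $pics_directory" >&2
    return 1
  fi
  mv "$pics_directory"/*.png "$tmp"/
}

# cron entry with stderr appended to an error log
0 * * * * /path/to/space_saver.sh 2>> /path/to/space_saver.err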

I've got a few TODOs on the project going forward:

  1. Add a gfycat integration so I can stop storing anything on my server
  2. Combine the webms by day. I figure some kind of regex, or even plain globbing on the filename prefix, could do this (see the sketch below), but I haven't tried it yet
  3. Error logging
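
For number 2, the date is already the filename prefix, so globbing should get it done. A sketch, untested, with the days/ output directory as an assumption:

#!/bin/bash
# stitch each day's webms into a single file per day
cd "$base/webms" || exit 1
mkdir -p days

# pull the unique YYYY_MM_DD prefixes out of the filenames
for day in $(ls *.webm | cut -c1-10 | sort -u); do
  # build a concat list for the day and stitch without re-encoding,
  # which should be safe since every webm came out of the same settings
  printf "file '%s'\n" "$day"_*.webm > "$day.txt"
  ffmpeg -f concat -safe 0 -i "$day.txt" -c copy "days/$day.webm"
  rm "$day.txt"
done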