Monday, February 8, 2016

Gstreamer video wall / grid

Related to my previous post: My brother also asked me to investigate whether there was a way to make it easier to search through the videos. My idea was to display multiple videos side by side simultaneously, to make it easier to scan through what the camera captured.

So far, I've come up with this script:
gst-launch-1.0 \
    videomixer \
        sink_1::zorder=1 sink_1::alpha=1.0 sink_1::ypos=0   sink_1::xpos=0    \
        sink_2::zorder=1 sink_2::alpha=1.0 sink_2::ypos=0   sink_2::xpos=630  \
        sink_3::zorder=1 sink_3::alpha=1.0 sink_3::ypos=0   sink_3::xpos=1260 \
        sink_4::zorder=1 sink_4::alpha=1.0 sink_4::ypos=496 sink_4::xpos=0    \
        sink_5::zorder=1 sink_5::alpha=1.0 sink_5::ypos=496 sink_5::xpos=630  \
        sink_0::zorder=1 sink_0::alpha=1.0 sink_0::ypos=496 sink_0::xpos=1260 \
        name=mix \
    uridecodebin uri=file:///path/to/video1.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    uridecodebin uri=file:///path/to/video2.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    uridecodebin uri=file:///path/to/video3.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    uridecodebin uri=file:///path/to/video4.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    uridecodebin uri=file:///path/to/video5.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    uridecodebin uri=file:///path/to/video6.mkv ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! mix. \
    mix. ! queue ! theoraenc ! progressreport name="Encoding Progress" ! matroskamux ! filesink location=/path/to/output/file.mkv
Note: The zorder and alpha attributes are only included for the sake of example. Further, note that sink_0 is actually the last video attached, in this case video6. Personally, I think it's a serious mistake on the part of the GStreamer developers to use such confusing nomenclature, but it's their framework and they can implement it how they want.

This pipeline does actually do the job, putting 6 videos in a 3x2 grid, and encoding the result to a single video file.

However, there are two nagging problems that I'm running into.

The first problem is that videomixer seems to be slower than molasses. It can take upward of ten minutes for this pipeline to finish merging six 10-second videos. I totally understand that the mixing process is CPU intensive, but it seems to take an excessive amount of time.

The second problem that I run into is that the resulting video, at least when I play it with VLC, displays only the first frame. If I drag the seek bar, I do see the other frames, but releasing it makes the video revert right back to displaying only a single frame.

If anyone has any insights into how to improve this pipeline, I'd be very interested in any advice.

Merging videos using GStreamer

My brother came to me the other day asking if I knew of a way to merge multiple small (approximately 30 seconds each) video files into a single, longer, video file. He works for a retail business and wanted to review some security footage more easily.

The GStreamer framework, as of version 1.6, has a plugin that makes this operation significantly easier than ever before. The concat element simply takes 1 to N inputs and activates them one at a time, playing each in its entirety, filtering the EOS event from all but the last element, and then moving on to the next. In this way, we can make the other elements in the pipeline treat these multiple sources of data as a single, continuous stream.

The ultra-simplified explanation is that we're simply playing each small video one after another with no pause in between, and then re-encoding the resulting video as a single file.

The problem that I was running into is twofold. First, some aspect of the concat element, when run from the command line using the gst-launch-1.0 debugging / prototyping program, becomes very hostile to handling more than, for example, 10 files at a time. I was able to consistently use a pipeline that looks like this:

gst-launch-1.0 concat name=c \
    uridecodebin uri=file:///path/to/video1.asf ! queue ! c. \
    uridecodebin uri=file:///path/to/video2.asf ! queue ! c. \
    uridecodebin uri=file:///path/to/video3.asf ! queue ! c. \
    uridecodebin uri=file:///path/to/video4.asf ! queue ! c. \
    uridecodebin uri=file:///path/to/video5.asf ! queue ! c. \
    c. ! videoconvert ! theoraenc ! progressreport name="Encoding\ Progress" ! matroskamux ! filesink location=/path/to/output/file.mkv ;
This works to merge a handful of files at a time, but trying to do more than roughly 10 made gst-launch throw errors very consistently.

My second problem, which is drastically compounded by the first, is that my brother handed me a thumb drive with over five thousand video files on it. Personally, I have no interest in sitting there typing out each file name one at a time, and to the best of my (admittedly limited) knowledge, GStreamer simply doesn't have an element that opens every file matching some pattern and attaches them to a sink in a way that would be useful for this situation.

So, I need to write a meta program that generates a program that calls gst-launch with only a few video files at a time. I figured that if I could merge them 5 at a time, I could merge the resulting videos 5 at a time, and repeat that pattern until there were only around a dozen big video files, at which point he can search through them manually on his own.
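A quick back-of-the-envelope check shows how fast the "merge 5 at a time, then repeat" scheme converges, assuming roughly 5000 input files:

```shell
#!/bin/sh
# How many 5-way merge passes does it take to get ~5000 files down
# to around a dozen? (Ceiling division: a final partial group of
# fewer than 5 files still produces one output file.)
files=5000
passes=0
while [ "$files" -gt 12 ]; do
    files=$(( (files + 4) / 5 ))
    passes=$((passes + 1))
done
echo "$passes passes, $files files left"   # → 4 passes, 8 files left
```

So even starting from over five thousand clips, four sweeps get it down to eight files.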

So here's what I've come up with so far. It's a bit of a big-huge-hack, but at least it'll let me get the job done and move on to other projects.
OUTPUTSCRIPT=encode.sh          # the generated script, run it afterward
VIDEOFOLDER=/path/to/videos     # folder containing the source *.ASF files
NUMBER_PER_SWEEP=5              # how many files to merge per gst-launch call
COUNTER=0

: > "$OUTPUTSCRIPT"             # start with an empty script
mkdir -p encode_results
echo "runfunction() {" >> $OUTPUTSCRIPT
echo "    until \$@" >> $OUTPUTSCRIPT
echo "    do" >> $OUTPUTSCRIPT
echo "        :" >> $OUTPUTSCRIPT
echo "    done" >> $OUTPUTSCRIPT
echo "}" >> $OUTPUTSCRIPT
echo "" >> $OUTPUTSCRIPT

for f in `ls *.ASF | sort -g` ; do
    if [ $COUNTER -eq 0 ] ; then
        FIRST=$f
        echo "runfunction gst-launch-1.0 --no-fault --gst-debug-disable concat name=c \\" >> $OUTPUTSCRIPT
    fi
    echo "    uridecodebin uri=file://$VIDEOFOLDER/$f ! videorate ! video/x-raw,framerate=[1/1,10/1] ! queue ! c. \\" >> $OUTPUTSCRIPT
    COUNTER=$((COUNTER + 1))
    if [ $COUNTER -eq $NUMBER_PER_SWEEP ] ; then
        echo "    c. ! videoconvert ! theoraenc ! progressreport name=\"Encoding\ Progress\" ! matroskamux ! filesink location=$VIDEOFOLDER/encode_results/encode_result_${FIRST}_${f}.mkv &" >> $OUTPUTSCRIPT
        echo "" >> $OUTPUTSCRIPT
        COUNTER=0
    fi
done

# If the last group has fewer than NUMBER_PER_SWEEP files, close it out too.
if [ $COUNTER -ne 0 ] ; then
    echo "    c. ! videoconvert ! theoraenc ! progressreport name=\"Encoding\ Progress\" ! matroskamux ! filesink location=$VIDEOFOLDER/encode_results/encode_result_${FIRST}_${f}.mkv &" >> $OUTPUTSCRIPT
    echo "" >> $OUTPUTSCRIPT
fi
You should be able to simply copy it into a bash prompt after changing the paths at the top.

It'll generate a new script. That new script will run all the encoding jobs in parallel. If any encoding job fails to complete successfully, then "runfunction" will automatically start it again.
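The retry behavior of runfunction can be seen in isolation with a stand-in command (the flaky function below is purely illustrative; in the generated script the argument is a full gst-launch pipeline):

```shell
#!/bin/sh
# runfunction retries its argument until it exits successfully,
# just like the version emitted into the generated script.
runfunction() {
    until "$@"
    do
        :
    done
}

# Stand-in for an encoding job that fails twice before succeeding.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

runfunction flaky
echo "succeeded after $attempts attempts"   # → succeeded after 3 attempts
```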

Thursday, February 4, 2016

How to manage an embedded Linux firmware update system

A friend of mine is working on an embedded Linux device that allows you to manage some security cameras that he has. I've also worked on setting up numerous systems like this for hobby projects in my personal time. Finally, my day job has started investigating this type of setup. Clearly this is a common scenario, so just for the purpose of getting all my thoughts down so I could digest them, I decided to write this post.

My friend and I got to talking the other week about how to manage automatically deploying OS updates to his device once it's in the hands of customers, plugged into their network. Since an update failure would essentially render the device useless to a customer, causing an expensive RMA and manual recovery operations by a technician, we wanted to investigate what standard-ish Linux utilities could be used to ramp up the reliability of the upgrade process.

What follows is essentially a brain dump of what I think a well put together system would look like. Be nice about typos, this is a very stream-of-thought document.


  • UEFI firmware
    • Supports Secure Boot
    • Supports either the bootloader modifying EFI variables, or the bootloader modifying data on the system boot partition.
  • x86_64 processor
  • Hardware watchdog
    • Watchdog verified to work properly with standard Linux utilities. E.g., available via a package manager for a distribution, not some crazy custom program and/or proprietary driver.
  • Internal, bootable, storage with sufficient space to store two full copies of the firmware with some scratch space left over.
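The hardware watchdog requirement maps directly onto standard tooling: a mainline-supported watchdog shows up as /dev/watchdog, and systemd itself can pet it. A sketch using the real RuntimeWatchdogSec= option (the 30 second interval is just an example value):

```ini
# /etc/systemd/system.conf (fragment)
[Manager]
# systemd (PID 1) feeds /dev/watchdog at half this interval;
# if PID 1 ever hangs, the hardware resets the board after 30s.
RuntimeWatchdogSec=30s
```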

Basic partitioning layout

GPT partition table. UUIDs determined statically for all instances of the product.
  1. UEFI system boot partition  -- Assigned UUID 1
  2. Firmware-A  -- Assigned UUID 2
  3. Firmware-B  -- Assigned UUID 3
  4. Data  -- Assigned UUID 4
Please note that there's no swap partition. Swap is highly useful for desktop or high-powered server situations, where overcommitting resources is a desirable capability, but an embedded setup really shouldn't run into a situation where it needs more RAM than is available.

Basic boot strategy

Image integrity

Using SecureBoot, a UEFI feature, we can use our private signing keys to ensure that the bootloader installed on the system is one that we deployed. Further, we can configure the bootloader to verify that the kernel that loads is also one that we deployed. Basic boot integrity is important, after all.

Verifying that the firmware image to boot is cryptographically signed, to ensure the image as a whole is known-good, is a good thing to consider as well, even though it may involve some CPU time spent doing the verification on such a large amount of data.

Boot availability

Somewhere in either the EFI system variables, or the system boot partition, we need to store a variable that tells us which partition to load. Essentially this can be the PARTUUID of the GPT partition. We also need a variable that holds a counter, representing the number of times we've attempted to boot this partition without the OS changing the counter back to 0.

We need two protection mechanisms to ensure we eventually manage to boot properly. The first mechanism is that many bootloaders can be configured to trigger some action when the kernel panics. This is a graceful failure, allowing us to fall back to a known good state.

The problem is what to do if the kernel simply hangs. Maybe there's some race condition, and it never quite gets to the panic that would allow our bootloader to recover. Here's where the hardware watchdog comes into play. If we assume that the hardware watchdog will reset the machine if (and only if) the machine fails to send it a heartbeat signal within some amount of time that should be sufficient to boot properly, we guarantee that even in a worst-case scenario, we'll always return to the bootloader to try booting again.

The final bit of magic is in the counter saying how many times we've failed to fully boot. Just before executing the kernel, the bootloader increments the counter for how many times it's tried that partition by one. If the counter reaches some upper limit, then the bootloader tries to boot from the other Firmware partition. (If it's using A, it switches to B, and vice versa). The upper limit can be stored anywhere. Hardcoded, another EFI variable, somewhere on the system boot partition. It doesn't really matter.
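Sketched as shell (not any particular bootloader's scripting language), the selection logic amounts to the following; BOOT_LIMIT and the A/B naming are illustrative, and in reality the values would live in EFI variables or on the system boot partition:

```shell
#!/bin/sh
# Sketch of the bootloader's firmware-partition selection logic.
BOOT_LIMIT=3

choose_partition() {
    # $1 = currently selected firmware (A or B)
    # $2 = failed boot attempts recorded for it so far
    if [ "$2" -ge "$BOOT_LIMIT" ]; then
        # Too many failures: switch to the other firmware partition.
        if [ "$1" = A ]; then echo B; else echo A; fi
    else
        echo "$1"
    fi
}

choose_partition A 1   # still under the limit → prints A
choose_partition A 3   # limit reached → prints B
```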

Basic runtime strategy

Post boot actions

If the system boots up fully, then among other things, one of the processes that needs to be launched is some program that resets the "number of boot attempts" variable to 0. If this process doesn't launch, then after some number of reboots, the system will still switch to the other firmware. Frankly, this process failing to launch and execute its task should be considered a hard failure for BOOT, not for runtime. This is an important distinction.

Using the systemd init system, it's possible to set service dependencies. Say your device, ultimately, needs to run 3 different services that stay alive at all times. Your dependency graph might look something like this

                Boot Counter Reset
                     //        ||      \\
        Service A  Service B  Service C

Where Service A, Service B, and Service C each have a systemd watchdog configured (different from the hardware watchdog; see the WatchdogSec= option of systemd) and use the sd_notify API to continually inform systemd that they are alive and functioning.

If any of services A, B, or C fails to launch, initialize, and run, the hardware watchdog heartbeat and boot counter reset programs won't launch. This will cause the hardware watchdog to trigger a system restart. If this happens enough times, the alternate firmware will be loaded.

The boot counter reset service is of type "oneshot", as it doesn't stick around after setting the counter variable.
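The dependency scheme above might be sketched as systemd units like these. Unit and binary names are hypothetical; WatchdogSec=, Type=notify, and Type=oneshot are real systemd options:

```ini
# service-a.service (one of the three critical services)
[Unit]
Description=Critical service A

[Service]
Type=notify
ExecStart=/usr/bin/service-a
# The service must sd_notify(0, "WATCHDOG=1") within every 15s
# window, or systemd considers it failed.
WatchdogSec=15s
Restart=on-failure

# boot-counter-reset.service
[Unit]
Description=Reset the failed-boot counter after a successful boot
# Only runs once all three critical services have started.
Requires=service-a.service service-b.service service-c.service
After=service-a.service service-b.service service-c.service

[Service]
Type=oneshot
# Hypothetical helper that writes 0 to the boot-attempts EFI variable.
ExecStart=/usr/bin/reset-boot-counter
```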

Additional watchdogging

The services in systemd should probably be configured to restart on failure. This way, if there's a minor problem that doesn't happen consistently, the service will automatically heal itself. Obviously, putting a limit on the number of restarts is a wise choice. If the service fails too many times, the hardware watchdog heartbeat service will be shut down, which will automatically trigger a system reboot.

An additional advanced feature would be to automatically detect too many service failures, and switch to the other firmware automatically. If a service is failing twice a minute, that's a good indication that something is wrong. Essentially, to accomplish this: when the hardware watchdog triggers (or is about to trigger, depending on how this gets handled), increment the number of times the hardware watchdog has triggered, and then reboot.

The bootloader should then check, in addition to the boot attempts number, the variable for the number of watchdog triggers, and if EITHER of them is above the threshold, switch to the alternate firmware after clearing both counters.

Firmware image


To ensure system integrity, mitigate tampering, and generally just make things have less moving parts, the firmware image, when loaded by the kernel and mounted as the root partition, should be strictly read-only. The storage on the data partition is the only part of the system that should have a mutable state during normal operations. This is basic deployment hygiene, and has obvious benefits and essentially no downside.


Compressing the firmware image is also worth doing. Among other reasons, a very compelling one is "why not?". There's almost no downside, aside from an incredibly small amount of boot-time CPU usage. A properly stripped-down firmware image will fit easily into the system's file cache, so after boot is finished, accessing the firmware filesystem will essentially involve no disk IO at all.

If the runtime cost of accessing parts of the compressed image is concerning, there's also always the option of copying the firmware (uncompressed) wholesale into a RAM-based filesystem, and using that as the root filesystem instead. That will guarantee that no disk access happens after boot for root-partition usage.

There are, of course, other reasons to compress your images, including faster firmware download times, faster firmware installation times, and faster cryptographic signature verification at boot (if you're using that).

Recommended type

I recommend using SquashFS with the XZ compression algorithm. Linux can use a SquashFS image written directly to a raw partition as the root filesystem, WITHOUT the need for an initramfs. Aside from the filesystem being read-only, it works identically to any other normal filesystem, with great file-cache properties, possibly saving you some (or all) disk accesses.
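Building such an image is nearly a one-liner with squashfs-tools; the directory and partition names below are placeholders:

```shell
# Build the image from a staged root filesystem tree...
mksquashfs /path/to/rootfs firmware.img -comp xz -noappend

# ...and write it directly to the raw firmware partition.
dd if=firmware.img of=/dev/disk/by-partuuid/UUID-2 bs=1M
```

The kernel command line then selects it with something like root=PARTUUID=UUID-2 ro.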

Deployment considerations

There are a couple ways to manage doing a firmware update. If you're using SquashFS with the XZ compression, you'll already have your firmware images as small as they are practically ever going to get without removing some of the files contained within them. This can drastically reduce your download size, when compared with downloading the uncompressed image.

An interesting thing about the way SquashFS handles compressed data is that instead of being a single stream of bytes, a SquashFS image is actually an organized collection of compressed chunks. When compressing a given filesystem tree, the SquashFS creation tools will always compress the data in the same order, in fixed-size chunks. This means that a small change to the original data will stay local to the parts of the SquashFS filesystem that represent the files that were modified.


When you consider common large-file transfer tools, such as rsync, this means that rsync can very efficiently update a squashfs image from an older version to a newer version, since only a small amount of the squashfs file will have changed. Of course, if you make a very large change, all bets are off.
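On the device side, that might look like the following (the host and paths are hypothetical):

```shell
# --inplace updates the existing local image file block-by-block
# instead of rebuilding it in a temporary copy, so rsync's delta
# transfer only rewrites the chunks that actually changed.
rsync --inplace --partial \
    updates.example.com::firmware/firmware-latest.img \
    /data/firmware-next.img
```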

Binary diff

Even better than rsync, though, is that you can pre-compute the differences between the current firmware and the newly deployed firmware. Each system that needs the firmware update can simply download the binary diff that you generate once, and then apply that binary diff directly to the raw partition that the firmware to be replaced is stored on. Of course, make sure you cryptographically sign the binary diff, and have the firmware upgrade process verify the signature before applying it.
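A sketch of that flow using the classic bsdiff/bspatch tools and an OpenSSL signature; all filenames, keys, and partition UUIDs are placeholders:

```shell
# On the build server: compute and sign the patch once.
bsdiff firmware-1.0.img firmware-1.1.img fw-1.0-to-1.1.patch
openssl dgst -sha256 -sign signing-key.pem \
    -out fw-1.0-to-1.1.patch.sig fw-1.0-to-1.1.patch

# On each device: verify the signature, apply the patch against the
# inactive firmware partition, then write the result back to it.
openssl dgst -sha256 -verify signing-pub.pem \
    -signature fw-1.0-to-1.1.patch.sig fw-1.0-to-1.1.patch \
&& bspatch /dev/disk/by-partuuid/UUID-3 /tmp/firmware-1.1.img fw-1.0-to-1.1.patch \
&& dd if=/tmp/firmware-1.1.img of=/dev/disk/by-partuuid/UUID-3 bs=1M
```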

If, for some reason, the firmware stored on the currently-not-in-use partition is a version that has no binary diff available, the firmware upgrade system can either request that the server create a diff on-demand, or can simply download the full update and apply it directly. Either method is fully functional.


Depending on what kind of Linux distribution you're using, it might be important to consider the upgrade path from very old firmwares to the latest and greatest. Considering you're deploying read-only images, there really should be very little reason to worry about a failure, but do keep this in mind when deciding on how to handle the actual download process, and what method of applying the firmware you end up picking.

If you're using a rolling-release distribution, such as Gentoo, a lot of packages can change without much warning in simply a few months. Obviously properly testing things is important, but it might be worth making a decision between doing an occasional "package freeze" versus a continuous firmware generation / deployment / self test system on internal hardware that gets used for dogfooding. It's a tough call, and there are pros and cons for both sides.

The right answers for your setup are entirely up to you.

Upgrading the bootloader and kernel


The kernel is stored separately from the firmware image, in the system boot partition. Depending on what bootloader you're using, you might need to name the kernel a specific filename, or you might need to overwrite a configuration file to point to the new kernel. In either case, there are a few failure scenarios here.

Unlike the firmware image, where we're already booted into a known-working image that we can fall back on if the upgrade fails, the kernel is a narrower point of failure.

When writing the kernel to disk, always write to a temporary name, then mv the kernel to its real filename once it's fully on disk. This helps protect against a power failure leaving a half-written kernel on disk.
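The pattern, sketched with a scratch directory standing in for the mounted boot partition (the vmlinuz filename is illustrative):

```shell
#!/bin/sh
# rename() is atomic within a filesystem, so any reader (including
# the bootloader after a crash) sees either the complete old kernel
# or the complete new one, never a half-written file.
bootdir=$(mktemp -d)                # stand-in for the boot partition
echo "old kernel" > "$bootdir/vmlinuz"

echo "new kernel" > "$bootdir/vmlinuz.tmp"    # download to a temp name
sync                                          # force the data to disk first
mv "$bootdir/vmlinuz.tmp" "$bootdir/vmlinuz"  # atomic swap
cat "$bootdir/vmlinuz"                        # → new kernel
```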

Consider setting your firmware up so that if the main kernel fails cryptographic verification, you boot from the previous kernel.

Consider also that it's possible for your kernel to pass cryptographic verification, but still fail to boot properly. As discussed in the booting section, a failed boot should result in the firmware image being switched. This doesn't help if the problem is with the recent kernel upgrade.

Address this problem by setting (yet another) EFI variable before rebooting into the bootloader, indicating that we just updated the kernel, perhaps with the value 1. The bootloader changes that variable to indicate we're about to attempt to boot the new kernel, perhaps to the value 2. If the bootloader encounters the value 2 when starting up, we know the new kernel failed to boot, so we revert to the old kernel. Just enhance the boot-retry-clearing program to also clear this value upon a successful boot.
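The state machine is tiny; a shell sketch, where the 0/1/2 values are the assumptions described above and the EFI-variable plumbing is left abstract:

```shell
#!/bin/sh
# 0 = normal, 1 = kernel just replaced, 2 = new-kernel boot attempted.
# The OS-side reset program sets the variable back to 0 on success;
# the bootloader bumps 1 to 2 just before trying the new kernel.
kernel_to_boot() {
    # $1 = value of the (hypothetical) EFI state variable at boot
    case "$1" in
        2) echo old ;;   # previous attempt never completed: fall back
        *) echo new ;;   # 0 or 1: boot the current kernel
    esac
}

kernel_to_boot 1   # first try of the freshly installed kernel → new
kernel_to_boot 2   # that try evidently failed → old
```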

Of course, this strategy only saves you if your kernel fails to boot. If the kernel has other problems that prevent things from working properly *after* you've cleared the efi variable, you'll encounter a reboot loop. Ultimately, it's *important* that you conduct serious regression testing before deploying updates. 

Careless human action can still break things no matter how many double checks are added to the system. Perhaps make the boot-variable-clearing program clear the variables only after some number of minutes. Or flag to your high-availability logic that boot completed fully but service initialization failed. It's up to you.


Generally, the process for upgrading the bootloader itself is very similar to that of the kernel. Write out the new file(s), copy the old files, mv the new files to the names of the old files, to minimize the amount of time the system is in an inconsistent state in case of power failure.

Alternatively, consider writing the new bootloader to a new name, and then changing the UEFI bootloader priority / default choice using the EFI variables.

Depending on how your main board is designed, it might also be possible to have multiple EFI system partitions on the same disk, with fallback settings configured so that if the main bootloader fails for some reason, the firmware falls back to the other. If your board can be configured to work this way, you can skip all the manual file writing and instead upgrade the bootloader and kernel in a single image, just like with the firmware images: replace the whole partition with the downloaded filesystem image, then change the default bootloader in the EFI.

BIOS upgrade

It's probably not a good idea to do this :-)

If you insist, I heavily recommend getting a mainboard that has dual BIOS chips with automatic failover.

Other thoughts

Saving even more bandwidth

Consider BitTorrent for downloading the new images. Everything's cryptographically signed, right? It might make your customers a bit grumpy that you're using their bandwidth, though.

If rsync or binary diffs aren't your style, consider the zsync tool, which pre-computes the hashes that the rsync algorithm needs to do its job.

Strip your firmware to the point of being naked

The fewer things you have in your fully functional firmware images, the fewer things can go wrong. Smaller security surface, smaller bug surface, smaller bit-rot surface, smaller transfer-corruption surface; in general, you'll be much happier the smaller your images are.
  • Strip out things like include files, compiler toolchains, and unneeded userland utilities (you probably don't need "ping", for instance).
  • Kill debug symbol files.
  • Remove all optional language files.
  • Statically compile as many of your programs as you can (Of course, make sure this actually saves you space).
  • Do a dependency analysis and ensure that 100% of the shared object files on the system are used by a binary that your system requires to function.
  • Consider compiling everything with -Os instead of -O3.
  • Remove man pages.
  • Recompile programs to remove unused features (gentoo is really nice for this :-) )
  • Compile as many parts of your system with link-time-optimization as you can. Gentoo is great for this, but be warned there are several programs that may not compile this way. 
  • Remove example configuration files, and remove normal config files where you want the default values of settings, and the config file is optional.
  • Remove the package manager of the Linux distribution you use, and all its associated utilities.
  • Consider alternative implementations of libraries and/or programs. The musl libc is quite a bit smaller than, say, glibc. Be warned, though: there may be porting issues. Consider toybox instead of busybox. That kind of thing.
  • Consider removing bash from the system entirely :-P
  • Remove, remove, remove.
If you need some ability to log in and interact with a live system for diagnostic purposes, seriously consider making two versions of every firmware: one with nothing but the bare minimum, and another with the various debug and troubleshooting tools included.

With the Linux kernel's OverlayFS, if you set up another partition on the device to hold the troubleshooting utilities, you can boot the system such that the contents of the second partition are overlaid on top of the first, appearing as a single filesystem. Any file collisions are won by the partition mounted on top, with the file from that partition being the one shown.
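The mount sequence might look like this; device and mount-point names are placeholders, and note that with multiple lowerdir entries, the leftmost layer wins collisions:

```shell
mount -t squashfs -o ro /dev/disk/by-partuuid/UUID-2 /mnt/firmware
mount -t squashfs -o ro /dev/disk/by-partuuid/UUID-5 /mnt/debug-tools
# Read-only overlay: no upperdir/workdir needed when nothing is written.
mount -t overlay overlay \
    -o lowerdir=/mnt/debug-tools:/mnt/firmware /mnt/root
```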

In this way, your normal run state is still a LEAN MEAN EFFICIENT MACHINE, while still providing access to rarely needed tools.

Small kernels are required

I seriously, fanatically, recommend a moduleless kernel. If your firmware needs some specific functionality, compile that functionality directly into the kernel. Anything that doesn't talk to the hardware you have, or provide a feature one of your services needs, should be removed. Remove support for all filesystems you don't use. Remove support for 32-bit binaries. Remove support for multimedia devices. Kill the entire graphics driver subsystem if you don't need graphics.

By removing as much as you can, you make your entire environment more efficient (cache friendliness), reduce your security surface, and reduce your bug surface. All big wins.

Most importantly though, modules make booting *hard*.

Other deployment environments


Cloud / Virtual Machine