March 7, 2016 / ftth

Properly cancelling a gobject action

When you use glib’s timeout_add or idle_add (examples below are for python), you can cancel the action before it runs by passing the returned source id to source_remove.

def action():
    return False

tid = gobject.timeout_add(1000, action)

However, if the action has already run, source_remove triggers the following warning in the terminal output:

GLib-CRITICAL **: Source ID XXX was not found when attempting to remove it

The documentation states that “It is a programmer error to attempt to remove a non-existent source.”, because “source IDs can be reissued after a source has been destroyed”.

If you use gobject to schedule a large number of actions, it means that upon cancelling the actions batch, you will end up with tons of warnings on the terminal for each action that has already been run.

I encountered this issue while developing a python benchmarking script that uses gobject to schedule the execution of simulated users. Before the mainloop is started, gobject.timeout_add is used to schedule one-time additions of simulated users over the test duration. Every source id is stored in a list so that the whole batch can be cancelled later; however, there is no way to tell whether a particular source id is still valid, so batch-cancelling is applied to all actions, whether they have already executed or not.

A way to fix this (thanks to matplotlib‘s zeroSteiner for the tip) is to get the gobject mainloop context and check whether the action’s source id is still in the execution queue:

context = ml.get_context()
action = context.find_source_by_id(tid)
if action and not action.is_destroyed():
    gobject.source_remove(tid)

However, the documentation states that “It is a programmer error to attempt to lookup a non-existent source.”, so maybe that’s not the cleanest way. If you know a better one, please let me know.
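If looking up sources by id feels too fragile, another option is to do the bookkeeping on the caller side: wrap each scheduled action so it removes its own id from a pending set when it runs, then batch-cancel only the ids still in that set. The sketch below is hypothetical: it uses a minimal stand-in scheduler (FakeScheduler) instead of the real main loop so the logic can run anywhere; with gobject you would call timeout_add / source_remove at the marked points.

```python
class FakeScheduler:
    """Stand-in for the gobject main loop: hands out ids and can 'run' sources."""
    def __init__(self):
        self._next_id = 1
        self._sources = {}

    def timeout_add(self, callback):          # stands in for gobject.timeout_add
        sid = self._next_id
        self._next_id += 1
        self._sources[sid] = callback
        return sid

    def source_remove(self, sid):             # stands in for gobject.source_remove
        if sid not in self._sources:
            raise RuntimeError("Source ID %d was not found" % sid)
        del self._sources[sid]

    def fire(self, sid):
        # Simulate the main loop running (and destroying) a one-shot source.
        self._sources.pop(sid)()

sched = FakeScheduler()
pending = set()

def schedule(action):
    # Wrap the action so it deregisters its own id once it has run.
    def wrapper():
        pending.discard(wrapper.sid)
        action()
        return False  # one-shot, like returning False from a timeout callback
    wrapper.sid = sched.timeout_add(wrapper)
    pending.add(wrapper.sid)
    return wrapper.sid

def cancel_all():
    # Only ids still pending are removed, so no stale-id warning can occur.
    for sid in list(pending):
        sched.source_remove(sid)
    pending.clear()

ran = []
a = schedule(lambda: ran.append("a"))
b = schedule(lambda: ran.append("b"))
sched.fire(a)     # "a" runs; its id leaves the pending set
cancel_all()      # removes only "b", never touches the already-run "a"
print(ran)        # prints ['a']
```

The same wrapper trick works with the real gobject API, since the wrapper runs in the same thread as the main loop and can safely mutate the pending set before the source is destroyed.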


January 9, 2016 / ftth

Benchmarking gstreamer OpenGL performance on raspberrypi

After a lot of tinkering (videotestsrc, using temporary gdp files in /tmp), I finally found the optimal way to benchmark the Pi's GL display performance (of course, blocksize needs to be adjusted to the raw buffer size).

gst-launch-1.0 filesrc num-buffers=100 location=/dev/zero blocksize=8294400 ! videoparse format=rgba width=1920 height=1080 ! glimagesink sync=false
...
Execution ended after 0:00:09.687275649

Then divide the number of buffers (100) by the execution time (9.69 s), giving 100/9.69 ≈ 10 fps.

videotestsrc was quite slow, gdpdepay was killing performance (it induces an additional copy), and working with raw files in /tmp was a little slower (but still the best way to test pre-rendered samples, e.g. for encoder benchmarking). Using /dev/zero makes it possible to generate all-zero buffers very cheaply (i.e. completely black, transparent images).


Note that Gstreamer 1.4 (raspbian) and 1.6.2 (arch) have the same performance, and results seem to be the same on the Pi B+ and the Pi 2.

Just uploading raw 1080p video to the GLES context (without displaying it) runs at 23 fps, which represents the actual memory-speed bottleneck (≈ 1.5 Gbit/s!)

gst-launch-1.0 filesrc num-buffers=100 location=/dev/zero blocksize=8294400 ! videoparse format=rgba width=1920 height=1080 ! glupload ! fakesink
Execution ended after 0:00:04.234214792
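As a sanity check, the quoted figures can be reproduced with a few lines of Python (a sketch; the buffer geometry of 1920×1080 RGBA at 4 bytes per pixel is taken from the pipelines above):

```python
# Buffer geometry from the pipeline: 1920x1080 RGBA, 4 bytes per pixel.
width, height, bpp = 1920, 1080, 4
buf_bytes = width * height * bpp   # 8294400, the blocksize used above

fps_display = 100 / 9.687          # glimagesink pipeline: ~10.3 fps
fps_upload = 100 / 4.234           # glupload-only pipeline: ~23.6 fps

# Memory bandwidth implied by the upload-only rate.
gbit_per_s = fps_upload * buf_bytes * 8 / 1e9   # ~1.57 Gbit/s

print(round(fps_display, 1), round(fps_upload, 1), round(gbit_per_s, 2))
```

This confirms that the upload path alone saturates at roughly 1.5 Gbit/s of raw pixel data, which is why the full display pipeline cannot reach 30 fps.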

Unfortunately, this means that the RaspberryPi isn’t usable for any project requiring 30 frames per second HD display performance using Gstreamer.




December 11, 2015 / ftth

Compiling glimagesink for GLES (without Xorg) on RaspberryPi

The original RaspberryPi has awesome software support for many things, but when it comes to multimedia capabilities, apart from the reference implementations (omxplayer, raspivid…), things tend to get messy or inconsistent, requiring you to recompile heavily patched packages.

As an example, I tried to compile the latest Gstreamer OpenGL elements without Xorg support, so that OpenGL can be used without the whole Xorg stack, e.g. for kiosk-style uses. Moreover, the current OpenGL/GLX/Xorg stack on the RaspberryPi is not hardware-accelerated (yet), so using GLES through the vendor SDK should offer better performance.

After a lot of fumbling, here’s a quick summary of the steps required to compile it (most of the time was spent figuring out the CFLAGS and LDFLAGS necessary to use the SDK OpenGL headers, complicated by a “bug/feature” with headers in the official RaspberryPi SDK, and by the fact that Mesa libraries can be picked up at runtime instead of the SDK ones):

pacman -S gstreamer base-devel
git clone git://
cd gst-plugins-bad
git checkout -b 1.6 remotes/origin/1.6
./configure CFLAGS="-I/opt/vc/include -I /opt/vc/include/interface/vcos/pthreads -I /opt/vc/include/interface/vmcs_host/linux/" LDFLAGS="-L/opt/vc/lib" --disable-gtk-doc --disable-opengl --enable-gles2 --enable-egl --disable-glx --disable-x11 --disable-wayland --enable-dispmanx --with-gles2-module-name=/opt/vc/lib/ --with-egl-module-name=/opt/vc/lib/

[go grab coffee/lunch/...]

mkdir -p ~/.local/share/gstreamer-1.0/plugins
cp ./ext/gl/.libs/  ~/.local/share/gstreamer-1.0/plugins

Then, after copying the relevant .so files to ~/.local/share/gstreamer-1.0/plugins, you should finally be able to display video using gstreamer:

GST_GL_WINDOW=dispmanx GST_GL_API=gles2 GST_GL_PLATFORM=egl gst-launch-1.0 videotestsrc ! glimagesink

The environment variables are not even necessary if only a single platform is supported (which is the case here), so you can simply run

gst-launch-1.0 videotestsrc ! glimagesink


I also created an AUR package (gst-plugins-bad-rpi-nox) for easier installation, though building it will still take a few hours to complete.

Many thanks to Julien Isorce and Matthew Waters for their help!

September 10, 2015 / ftth

Using an Apple Remote A1156 on RaspberryPi/Openelec

I got a TSOP4838 IR sensor and connected it to my Rpi2 (running OpenELEC 5.0.8) as follows:

(credits to adafruit)

1) Log into the RaspberryPi over ssh and edit /flash/config.txt

mount -o remount,rw /flash
echo "dtoverlay=lirc-rpi" >> /flash/config.txt

2) Create an autostart script that will restart lircd with the appropriate options:

echo "killall lircd; mkdir -p /var/run/lirc/; /usr/sbin/lircd --driver=default --device=/dev/lirc0 --uinput --output=/var/run/lirc/lircd --pidfile=/var/run/lirc/ /storage/.config/lircd.conf" > /storage/.config/

Let’s make it executable:

chmod +x /storage/.config/

3) Put into /storage/.config/lircd.conf

wget -O /storage/.config/lircd.conf

Reboot, and make sure that lircd is running with the options above.

OpenELEC:~/.kodi/userdata # ps | grep lirc
353 root 0:00 /usr/sbin/lircd --driver=default --device=/dev/lirc0 --uinput --output=/var/run/lirc/lircd --pidfile=/var/run/lirc/ /storage/.config/lircd.conf

4) To check that it is working properly, the irw command should show output when you press the buttons:

OpenELEC:~/.config # irw
 4e 0 KEY_KPPLUS devinput
 4e 1 KEY_KPPLUS devinput
 4e 0 KEY_KPPLUS_UP devinput

5) Put the following into /storage/.kodi/userdata/Lircmap.xml (the upper-case L is important)

 <remote device="devinput">

The important part (which made me scratch my head for quite some time, due to broken guides and docs all around) is to match the remote device name with what irw outputs, and to map the key names to the ones irw shows.

April 29, 2015 / ftth

Docker may just have won the container war

In a previous post, I discussed virtualization technologies as seen through the lens of the largest ready-made appliance catalogs, comparing container and image-based techs: OpenVZ, Docker and VMware images.

“Standard”/format-wise, even if Docker has clearly had the popularity edge for some time already, launching VMware images is still the gold standard behind company firewalls; however, VMware’s April 20th announcement of native Docker support through their Open Source Lightwave and Photon projects is indisputably big news: the king of virtualization will support Docker (as well as CoreOS Rocket, which is LXC under the hood, and the Pivotal Garden container format), which in my opinion is the last nail in the coffin of OpenVZ, and the consecration of Docker.

Shortly before that, at the beginning of April, Google announced and published Kubernetes, its own Open Source container orchestration solution built on top of CoreOS and Docker (with Rocket support in the works). Kubernetes comes with multi-provider support (Azure, AWS, Rackspace, VMware vSphere…) and a few app examples (WordPress / MySQL, Celery / RabbitMQ, Cassandra, …).

Last November, AWS added ECS (EC2 Container Service), which is all about running Docker images; Microsoft announced the same for Azure, and even for Windows Server at the end of February.

So many announcements at the same time: VMware adding container support with Docker in an Open Source project, and Google publishing Kubernetes, their next-generation container orchestration software based on Docker, even ahead of the scientific publication, as well as launching Google Container Engine, a managed version of Kubernetes.

Docker is here to stay!

February 10, 2015 / ftth

Forcing HDMI colorspace to Full on intel haswell / Arch Linux

Moving to a new i7-4771 computer (Haswell), I had to connect one of my two monitors using HDMI. Unfortunately, the contrast seemed really low compared to the other (identical) display, exactly as if the brightness had been boosted and the contrast lowered. A washed-out, dull picture.

What happens is that the Intel i915 GPU driver automatically falls back to a limited colorspace (which is normally expected by flat-panel TVs).

The awesome Arch wiki provided the solution, but does not give a lot of hints on where to deploy the fix.

I ended up putting the following shell script into /etc/X11/xinit/xinitrc.d/

if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI2 --set "Broadcast RGB" "Full"
fi

(don’t forget to replace “HDMI2” with your own display name, as reported by xrandr).

It’s sad to think that a processor/GPU released in 2013 still has this kind of issue. Oh, and did I mention I am using the latest stable kernel/driver stack?

Edit: another way to fix this is to use a DVI-HDMI adapter on the display; it forces the GPU to use a proper DVI/RGB mode with full colorspace depth.


February 9, 2015 / ftth

Public virtual appliance repositories: who has the largest community ?

If you are an SME sysadmin, or just want to host your own services, you may wish to “click and run” things in order to test them. While installing and running an Android app is quite straightforward, that’s really not the case for the Open Source web services (like wordpress, wikis, LDAP servers, owncloud, gitlab, …) needed to power a small business. And that’s a pretty big deal if you are reluctant to rely solely on free/public cloud services like Dropbox.

The focus of this article is not really to discuss which is the best technology among the plethora of existing virtualization bits and pieces (Eucalyptus / amazon AWS / libvirt / LXC / OpenVZ / Xen / Parallels / CloudStack / …) but rather which solutions go beyond, by offering a collection of publicly available, ready-to-use images; in the end, I believe that’s what this is all about for SMEs: if you are looking for ready-to-deploy templates/recipes/images, it means you are not really into sysadmin work, or only have time for your core business (e.g. your product). And deploying a new tool with highly insecure development-level options (no web frontend, an sqlite database, …) will make it hard to migrate to production if your users actually end up liking the tool you deployed “just for testing”. As such, it makes sense to deploy something a little closer to production-ready from the start.

The ideal tool is a virtualization host platform, with a web-based user interface, that offers an integrated click-and-run experience directly from within the GUI.

The contenders

At the time we evaluated the available solutions, we (very briefly) looked at the VMware Exchange marketplace, which offers exactly what we were dreaming of: a virtual appliance marketplace. However, VMware’s prices and the lack of paravirtualization pushed us toward Proxmox instead (see full or para-virtualization compared to virtual containers).

In our office, we have been running quite a large number of internal services on the very powerful Proxmox virtualization server for a few years now; fortunately, it includes the templates repository (more about that later).

More recently, Docker came along, tackling a similar problem with a different, more atomic approach: it also runs software in isolated containers, but without requiring a full system image. This is a more devops/cloud-oriented approach, and it too provides a publicly available “appliance” marketplace.

A surprising new contender is actually Synology, which offers add-on packages as well as a community addons website; that’s a pretty clever move, since many SMEs and individuals do buy their cost-effective NASes. This makes it possible to run e.g. a wordpress blog with a click-and-run experience. Although this is a sub-professional product (many packages are in fact end-user / mass-market products like multimedia services), I’m pretty much convinced that a huge number of these devices are swarming in many places, possibly becoming the biggest self-hosting platform in the coming years.

Virtual appliances providers

Even if they do not provide the host virtualization platform itself, a few players exist that provide cross-virtualization compatible images, ready to use. They are a necessary partner for the host virtualization platforms, providing a large panel of ready-made “recipes”.

The two virtual appliance providers I know of are and They both focus on building a unified framework that generates virtual appliances for at least VMware and the major cloud providers (e.g. Amazon AWS).

Proxmox VE

While based on the quite venerable OpenVZ technology (from the company Parallels), which requires running a patched (and obviously outdated, 2.6.32-29-pve at the time of writing) kernel, Proxmox had the very good initiative of integrating with; this means that from the management web interface, you can import templates directly from the website.

That way, you can download and run any (supported) software nearly instantly, without tinkering too much with the boring sysadmin stuff (not that sysadmin stuff is boring per se, but it is overkill when you just want to test something). Proxmox / turnkeylinux do the job very nicely, but not much activity has been happening lately (the last Proxmox VE version was released on 15 Sept. 2014, and the turnkeylinux guys have been pretty quiet as well, with the latest commit activity around July 2014). Also, a relative downside is that you need to download a complete filesystem image, which reduces the advantages of container technology: you end up duplicating the same stuff over and over.


Popularity contest

Here is a Google Trends capture showing that Docker is clearly taking a large bite out of VMware and skyrocketing:


Obviously, this graph mostly rewards the PR departments of the two big players (VMware lists around 15k people on LinkedIn, and Synology 214, its popularity boosted by its mass-market products, while Docker is “only” 74 people, Bitnami 26, and I believe turnkeylinux are ~2 people).

Templates: the numbers

But how do these solutions compare in terms of available ready-made images/templates for the click-and-run experience we are looking for?

  • looking at the turnkeylinux github repository, around 120 official appliances are available; after using quite a number of these, I can tell that they work pretty nicely
  • even if the Docker registry home page title claims “Use one of the 14000+ Dockerized apps to jump-start your next project”, the actual official images repository only contains around 60 appliances, and there is apparently no way to dump/count the total number of public apps anywhere
  • the VMware virtual appliances marketplace boasts a highly impressive count of 1900+ ready-made virtual machine images; a closer look reveals that a large chunk (1270) are automatically generated (and mostly outdated) images built from bitnami “stacks” (which count 104 actual Open Source products); funnily enough, the turnkeylinux appliances also end up in this huge list
  • the Synology app list contains 75 official packages, and the community repository 67, which raises the bar to an impressive 140+ apps
  • the official LXC git shows 17 templates, which are very generic

An example app: Owncloud

A fast-deploying web service you may want to host yourself is the awesome Owncloud, an “open source Dropbox clone” (and more). Let’s compare how our contenders fare with regard to Owncloud:

  • docker search owncloud | wc gives 26 variants (!!!)
  • turnkeylinux has an owncloud app image
  • the vmware marketplace returns around 18 variants, most of them outdated bitnami stacks (including the turnkeylinux one)

The missing pieces

Surprisingly, nobody has started any LXC-based community template sharing yet. This could be the actual OpenVZ killer, since LXC is now in the mainline kernel.

To wrap it up

We can only salute the transparency of the Proxmox/ combo, which displays actual numbers boldly and provides an actual end-to-end product with a nice user interface; I hope that the lack of updates is only the sign of good things to come: they do an incredible amount of nice work with very limited resources, and I wish them luck. Unfortunately, the OpenVZ technology seems to be declining, even if Parallels just announced the merging of OpenVZ and Parallels Cloud Server into a single common open source code base; its outdated kernel means that Proxmox will not run on very modern hardware, so its future is quite uncertain (at least on the OpenVZ front). It looks like it’s too late for a U-turn, especially since bitnami apparently dropped support for Parallels some time ago (otherwise, it would have boosted the amount of available software). I hope that Proxmox will consider integrating other sources of images directly into their web management interface (from Docker or the VMware marketplace?) and implement LXC support to replace the aging OpenVZ.

Docker has generated a lot of noise lately, but it’s probably not meant for click-and-use service deployment (rather, for type-and-deploy). Not being an expert here, I am very much interested to hear whether an easy-to-use web frontend with integrated click-and-launch features exists (like the apparently experimental which offers 18 templates). Interestingly, systemd 219 will add support for downloading Docker images directly in a new tool called systemd-import; this adds to the recognition of Docker as the “most popular container solution used today”.

Synology is surprisingly fast, but it’s not virtualization, which excludes it from this roundup; however, should they add proper LXC-based virtualization, they may well become a fast-growing contender: there are a lot of their devices out there, and their current system already supports an update mechanism.

VMware is the no-surprise dominant marketplace and the probable de-facto standard; theirs is the biggest virtualization deployment worldwide (~50% market share in 2014); however, the marketplace integration directly within the control user interface is pretty minimalistic (to say the least) and does not really allow for the pure “click and run” experience. It is however no surprise that Owncloud themselves provide an official VMware image as their main commercial platform.


Clicking on “Download a Virtual Appliance” is supposed to open up a web browser on the VMware marketplace website (but on my system, it opened a text editor containing the HTML of the page…).

Bitnami is a very impressive player, because they generate native installers on top of their VMware and push-to-cloud integrations, which should provide the company with enough money to run and maintain the large quantity of stacks they provide. They even announced a Docker compatibility layer recently, so that’s definitely a company to watch. We have however yet to see a user-friendly UI integration into an actual virtualization management interface. Also, bitnami is currently neither open to contributions (contrary to turnkeylinux) nor Open Source.

Setting VMware aside, I believe there is no clear winner here just yet: a clear winner would be the standard, de-facto format that any serious web-based project would deliver its product in. As far as I know, the only standard out there for Open Source products is still the source code itself (or an installer), either in the form of a tar.gz archive (old-school) or git branches (github).

Getting back to a simple click-and-deploy all-in-one solution, a github-based LXC image community together with a nice web-based frontend (like lxc-webpanel) could serve this purpose quite elegantly, as long as the community picks it up; the value resides not only in the underlying technology, but in the metadata and the community that powers it.

If you know of any other sources of ready-to-use appliances that are integrated into a privately deployable virtualization solution, please do tell!

Edit 29.04.2015: since writing this article, I found out about DigitalOcean (20 templates), Yunohost (20 apps), (30 apps), Ubos (9 apps).

Also, Google announced Cloud Launcher (130 templates, both VMs and containers, most provided by Bitnami, and only 18 by Google); a look into the Amazon AWS marketplace shows 281 Bitnami-provided solutions out of over 2000 products. AWS and Google Cloud Launcher cannot be included in this roundup, which focuses on privately deployable (self-hosted) solutions, but they demonstrate that public cloud integration is probably the fuel that can fund the development and support of self-hosting solutions.

