I installed a fresh Arch system on an EFI-enabled machine and wanted to use the systemd boot loader (systemd-boot). While the Arch install guide is pretty concise (which is good), it took me a few loops to make it work with UEFI/systemd.
The trick was to
- select the UEFI boot mode when booting the USB install drive (otherwise /sys/firmware/efi/efivars will not be mounted)
- ensure that a big enough EFI System Partition is present on the root drive (/dev/sda1); by big enough, I mean at least 80 to 100 MB, because we'll store the kernels there
- after pacstrap (but before genfstab, so that /boot will be added to /etc/fstab when running it), copy the /mnt/boot contents somewhere, create a new /mnt/boot directory, mount the EFI System Partition on it, and move the previous /mnt/boot/ contents into the new folder; that way, the kernel(s) will reside on the EFI System Partition itself
- after arch-chroot, install systemd-boot into /boot and create a boot entry
In a little more detail:
#1 Create the EFI System Partition (using gdisk /dev/sda)
- o (create new GPT table)
- n (create a new partition; keep it as the first one, make it 100 MB, and select the EF00 hex code)
- n (create the other partitions as usual)
- w (write)
#2 Format the EFI System Partition
mkfs.vfat -F32 /dev/sda1
If you run into the "WARNING: Not enough clusters for a 32 bit FAT!" error, even with the -s2 or -s1 arguments, reboot the system and try again (it worked for me).
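To double-check that the new partition is really typed as an EFI System Partition before going further, lsblk can show the partition type GUID (a quick verification I find handy; it is not part of the original steps):
# EF00 shows up as PARTTYPE c12a7328-f81f-11d2-ba4b-00a0c93ec93b
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE /dev/sda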
#3 Mount /dev/sda1 onto /boot and move the /boot contents into it
mv /boot /boot.bak
mkdir /boot
mount /dev/sda1 /boot
mv /boot.bak/* /boot/
rmdir /boot.bak
(from the live environment, as in the summary above, prefix these paths with /mnt)
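With the EFI System Partition mounted on /mnt/boot, the usual genfstab step from the install guide will then record /boot in fstab, e.g.:
genfstab -U /mnt >> /mnt/etc/fstab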
#4 Install boot loader and create the boot entry
- install bootloader
bootctl --path=/boot install
- identify the root partition (not /boot) PARTUUID using ls -l /dev/disk/by-partuuid/
- create /boot/loader/entries/arch.conf as follows (and update /boot/loader/loader.conf to default to "arch"; see the example below)
title Arch Linux
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options root=PARTUUID=4f66ed9b-f72d-62c1-89a6-7e2f1979a8f6 rootfstype=ext4 add_efi_memmap
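For reference, a minimal /boot/loader/loader.conf that defaults to the entry above could look like this (a sketch; the timeout value is just an example):
default arch
timeout 3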
The painful part was copy-pasting the partition UUID in the command line environment (you can't use systemd to run sshd until the installation is complete and the system has been rebooted, or maybe I missed something here).
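One way to avoid typing the PARTUUID by hand is to let blkid append it to the entry file and then move it into place with an editor (my workaround, assuming /dev/sda2 is the root partition):
blkid -s PARTUUID -o value /dev/sda2 >> /boot/loader/entries/arch.conf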
NB: the instructions above are for the archlinux-2016.09.03-dual.iso release.
With gobject, a one-shot action can be scheduled with timeout_add and cancelled with source_remove:

import gobject

def action():
    return False  # returning False makes this a one-shot action

tid = gobject.timeout_add(1000, action)
gobject.source_remove(tid)
However, if the action has already run, source_remove will trigger the following warning in the terminal output:
GLib-CRITICAL **: Source ID XXX was not found when attempting to remove it
The documentation states that “It is a programmer error to attempt to remove a non-existent source.” because “source IDs can be reissued after a source has been destroyed”
If you use gobject to schedule a large number of actions, this means that upon cancelling the batch, you end up with a warning on the terminal for each action that has already run.
I encountered this issue while developing a Python benchmarking script that uses gobject to schedule the execution of simulated users. Before running the mainloop, gobject.timeout_add is used to schedule one-time additions of simulated users over the test duration. Every source id is stored in a list so that we can batch-cancel them; however, there is no way to tell whether a particular action id is still valid, so batch-cancelling is done on all actions (regardless of whether they have already been executed or not).
A way to fix this (thanks to matplotlib's zeroSteiner for the tip) is to get the gobject mainloop context and check whether the action's source id is still in the execution queue:
context = ml.get_context()  # ml is the gobject.MainLoop instance
action = context.find_source_by_id(tid)
if action and not action.is_destroyed():
    gobject.source_remove(tid)
However, the documentation states that “It is a programmer error to attempt to lookup a non-existent source.”, so maybe that’s not the cleanest way. If you know a better one, please let me know.
After a lot of tinkering (videotestsrc, using temporary gdp files in /tmp), I finally found the optimal way to benchmark the Pi GL display performance (of course, blocksize needs to be adjusted to the raw buffer size).
gst-launch-1.0 filesrc num-buffers=100 location=/dev/zero blocksize=8294400 ! videoparse format=rgba width=1920 height=1080 ! glimagesink sync=false
...
Execution ended after 0:00:09.687275649
Then divide the number of buffers (100) by the execution time (9.69), giving 100/9.69 = ~10 fps.
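As a sanity check of the numbers above: the blocksize is exactly one raw RGBA frame, and the frame rate falls out of the buffer count and run time:
# blocksize = width x height x 4 bytes per RGBA pixel
echo $((1920 * 1080 * 4))           # 8294400
# fps = num-buffers / execution time
echo "scale=1; 100 / 9.687" | bc    # ~10.3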
videotestsrc was quite slow, gdpdepay was killing performance (it induces additional copying), and working with raw files in /tmp was a little slower (but still the best way to test pre-rendered samples, e.g. for encoder benchmarking). Using /dev/zero makes it possible to generate all-zero buffers very cheaply (meaning completely black, fully transparent images).
Note that GStreamer 1.4 (Raspbian) and 1.6.2 (Arch) have the same performance, and results seem to be the same on the Pi B+ and the Pi 2.
Just uploading raw 1080p video to the GLES context (without displaying it) runs at 23 fps, which represents the actual memory speed bottleneck (about 1.5 Gbit/s!):
gst-launch-1.0 filesrc num-buffers=100 location=/dev/zero blocksize=8294400 ! videoparse format=rgba width=1920 height=1080 ! glupload ! fakesink
Execution ended after 0:00:04.234214792
Unfortunately, this means that the RaspberryPi isn't usable for any project requiring 30 frames per second HD display performance with GStreamer.
The original RaspberryPi has awesome software support for many things, but when it comes to multimedia capabilities, apart from the reference implementations (omxplayer, raspivid…), things tend to get messy or inconsistent, requiring you to recompile heavily patched packages.
As an example, I tried to compile the latest GStreamer OpenGL elements without Xorg support, so that OpenGL can be used without the whole Xorg stack, e.g. for kiosk-style uses. Moreover, the current OpenGL/GLX/Xorg stack on the RaspberryPi is not hardware-accelerated (yet), so using GLES through the vendor SDK should offer better performance.
After a lot of fumbling, here's a quick summary of the steps required to compile it (most of the time was spent figuring out the CFLAGS and LDFLAGS necessary to use the SDK OpenGL headers, complicated by a "bug/feature" in the headers of the official RaspberryPi SDK and by the fact that the Mesa libraries can be picked up at runtime instead of the RPi SDK ones):
pacman -S gstreamer base-devel
git clone git://anongit.freedesktop.org/gstreamer/gst-plugins-bad
cd gst-plugins-bad
git checkout -b 1.6 origin/1.6
./autogen.sh
./configure CFLAGS="-I/opt/vc/include -I /opt/vc/include/interface/vcos/pthreads -I /opt/vc/include/interface/vmcs_host/linux/" LDFLAGS="-L/opt/vc/lib" --disable-gtk-doc --disable-opengl --enable-gles2 --enable-egl --disable-glx --disable-x11 --disable-wayland --enable-dispmanx --with-gles2-module-name=/opt/vc/lib/libGLESv2.so --with-egl-module-name=/opt/vc/lib/libEGL.so
make
[go grab coffee/lunch/...]
mkdir -p ~/.local/share/gstreamer-1.0/plugins
cp ./ext/gl/.libs/libgstopengl.so ~/.local/share/gstreamer-1.0/plugins
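Before trying a full pipeline, you can check that the freshly built plugin is picked up (a quick check I would add; gst-inspect-1.0 ships with GStreamer):
gst-inspect-1.0 glimagesink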
Then, with the relevant .so files copied to ~/.local/share/gstreamer-1.0/plugins, you should finally be able to display video using GStreamer:
GST_GL_WINDOW=dispmanx GST_GL_API=gles2 GST_GL_PLATFORM=egl gst-launch-1.0 videotestsrc ! glimagesink
The environment variables are not even necessary if only a single platform is supported (which is the case here), so you can even run
gst-launch-1.0 videotestsrc ! glimagesink
I also created an AUR package (gst-plugins-bad-rpi-nox) for easier operation, though building it will still take a few hours.
Many thanks to Julien Isorce and Matthew Waters for their help !
(credits to adafruit)
1) Log into the RaspberryPi over ssh and edit /flash/config.txt
mount -o remount,rw /flash
echo "dtoverlay=lirc-rpi" >> /flash/config.txt
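If your IR receiver is not wired to the default pin (GPIO 18), the lirc-rpi overlay accepts a pin parameter (mentioned here for completeness; adjust the number to your wiring):
echo "dtoverlay=lirc-rpi,gpio_in_pin=23" >> /flash/config.txt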
2) create an autostart script that will restart lircd with appropriate options:
echo "killall lircd; mkdir -p /var/run/lirc/; /usr/sbin/lircd --driver=default --device=/dev/lirc0 --uinput --output=/var/run/lirc/lircd --pidfile=/var/run/lirc/lircd-lirc0.pid /storage/.config/lircd.conf" > /storage/.config/autostart.sh
Let’s make it executable:
chmod +x /storage/.config/autostart.sh
3) Put http://lirc.sourceforge.net/remotes/apple/A1156 into /storage/.config/lircd.conf
wget http://lirc.sourceforge.net/remotes/apple/A1156 -O /storage/.config/lircd.conf
Reboot, and make sure that lircd is running with the options above.
OpenELEC:~/.kodi/userdata # ps | grep lirc
  353 root       0:00 /usr/sbin/lircd --driver=default --device=/dev/lirc0 --uinput --output=/var/run/lirc/lircd --pidfile=/var/run/lirc/lircd-lirc0.pid /storage/.config/lircd.conf
4) to check if it is working properly, the irw command should show actual output when you press the buttons:
OpenELEC:~/.config # irw
4e 0 KEY_KPPLUS devinput
4e 1 KEY_KPPLUS devinput
4e 0 KEY_KPPLUS_UP devinput
5) Put the following into /storage/.kodi/userdata/Lircmap.xml (the upper-case L is important)
<lircmap>
  <remote device="devinput">
    <up>KEY_KPPLUS</up>
    <down>KEY_KPMINUS</down>
    <left>KEY_REWIND</left>
    <right>KEY_FASTFORWARD</right>
    <menu>KEY_MENU</menu>
    <select>KEY_PLAY</select>
  </remote>
</lircmap>
The important part (which made me scratch my head for quite some time due to broken guides and docs all around) is to match the remote device attribute with the device name irw outputs, and to map the key names to the ones irw shows.
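Note that Kodi only reads Lircmap.xml at startup, so restart it for the mapping to take effect (OpenELEC runs Kodi as a systemd service; the service name is assumed here):
systemctl restart kodi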
In a previous post, I discussed virtualization technologies as seen through the lens of the largest ready-made appliance catalog, comparing container- and image-based technologies: OpenVZ, Docker and VMWare images.
Standard- and format-wise, even if Docker has clearly had the popularity edge for some time already, launching VMWare images is still the gold standard behind company firewalls; however, VMWare's announcement on April 20th that it will natively support Docker through its open-source Lightwave and Photon projects is indisputably big news: the king of virtualization will support Docker (as well as CoreOS Rocket, LXC under the hood, and the Pivotal Garden container format), which in my opinion is the last nail in the coffin of OpenVZ, and the consecration of Docker.
Shortly before that, at the beginning of April, Google announced and published Kubernetes, its own open-source container orchestration solution built on top of CoreOS and Docker (with Rocket support in the works). Kubernetes comes with multi-provider support (Azure, AWS, Rackspace, VMWare vSphere…) and a few app examples (WordPress / MySQL, Celery / RabbitMQ, Cassandra…).
Last November, AWS added ECS (EC2 Container Service), which is all about running Docker images; Microsoft announced the same for Azure, and, at the end of February, even for Windows Server.
So many announcements at the same time: VMWare adding support for containers with Docker in an open-source project; Google publishing Kubernetes, its next-generation container orchestration software based on Docker, even before the scientific publication; and Google launching a managed version of Kubernetes, Google Container Engine…
Docker is here to stay!
Moving to a new i7-4771 computer (Haswell), I had to connect one of my two monitors using HDMI. Unfortunately, the contrast seemed really low compared to the other (identical) display, exactly as if the brightness had been boosted and the contrast lowered. A washed-out, dull picture.
What happens is that the Intel i915 GPU driver automatically falls back to a limited RGB range over HDMI (the 16-235 range normally expected by flat-panel TVs).
The awesome Arch wiki provided the solution, but does not give a lot of hints on where to deploy the fix.
I ended up putting the following shell script into /etc/X11/xinit/xinitrc.d/50-intel-fix.sh:
if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
/usr/bin/xrandr --output HDMI2 --set "Broadcast RGB" "Full"
(don't forget to replace "HDMI2" with your own output name, as reported by xrandr).
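To find the right output name and inspect the current value of the property before scripting the fix, you can query xrandr directly:
# list connected outputs, then look at the Broadcast RGB property
xrandr | grep ' connected'
xrandr -q --prop | grep -A 2 'Broadcast RGB'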
It's sad to think that a processor/GPU released in 2013 still has this kind of issue. Oh, and did I mention I am using the latest stable kernel/driver stack?
Edit: another way to fix this is to use a DVI-HDMI adapter on the display; it forces the GPU to use proper DVI/RGB with full colorspace depth.