LIRC on a Raspberry Pi for a silver Apple TV remote

The idea here is to control a full-screen chromium webpage via a simple remote control (I’m using it for full-screen TV-like prototypes). It’s very straightforward really, but

  • I couldn’t find the right kind of summary of LIRC, the Linux infrared remote control tool
  • It took me a while to realise that the receiver end of IR is so easy if you have GPIO (as on a Raspberry Pi) rather than using USB
  • Chromium 51 is now the default browser in Raspbian Jessie, making it easy to do quite sophisticated web interfaces on the Pi 3

The key part is the mapping between what LIRC (or its daemon, lircd) does when it hears a keypress and what you want it to do. Basically there’s a chain: lircd decodes the IR signal into a named button press, and each button press is then mapped to a command – here, an xdotool call that sends a keyboard event to the browser.

It’s all a bit convoluted – remote key presses get mapped to the nearest keyboard equivalents, which are then handled by JavaScript in the webpage. But it works perfectly.

I’ve provided an example lircd.conf for silver Apple TV remotes, but you can generate your own for any IR remote easily enough.

The way I’ve used xdotool assumes there’s a single webpage open in chromium with a specific title – “lirc-example”.
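To illustrate, the glue between lircd and xdotool can live in ~/.lircrc, which irexec reads. This is only a sketch – the button name (KEY_RIGHT) and the exact xdotool incantation are assumptions, not necessarily what’s in the repo:

```
# ~/.lircrc: when lircd reports KEY_RIGHT, send a Right-arrow keypress
# to the chromium window titled "lirc-example"
begin
    prog = irexec
    button = KEY_RIGHT
    config = xdotool search --name "lirc-example" key --window %1 Right
end
```

With a stanza like this per button, running irexec in the background turns remote presses into keystrokes the page’s JavaScript can catch.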

Full instructions are on GitHub.






Twitter for ESP 8266

I’ve been using the Wemos D1 Mini ESP 8266 for lots of projects, and I like it very much – being an ESP, it’s got wifi, you can use it straightforwardly with Arduino libraries, it has plenty of pins and uses USB micro rather than mini. Dirk introduced me to a nice library to make it broadcast an access point if it can’t get online. There are loads of D1 Minis on ebay.

I wanted to make a tweeting spatula for Jasmine because of her brilliant work on CAKE, the Cook Along Kitchen Experience. Seemed like the obvious thing to do. But Twitter’s API requires OAuth 1.0a request signing, which is reasonably complicated and not something many people want to code, especially on such a constrained platform. Most people use some sort of IFTTT-type mechanism, connecting accounts on the cloud side, or run a server-side proxy themselves to do the heavy lifting.

But four months ago, Soramimi wrote a twitter client for the ESP, including from-scratch implementations of base64 and sha1! It’s amazing. So that’s what I used. But I barely changed it, just adding the AP library and having it tweet when a gyroscope hit a particular threshold (using the Adafruit sensor library) rather than via a webpage.


I used a small rechargeable battery, and some cable webbing to cover it all in a handle – Jon Tutcher’s idea 🙂

The wiring’s straightforward – here’s a picture from when I was moving from the plugged to a soldered version.


I think maybe an accelerometer might work better – I’m going to try these cheapo ones from Pimoroni next. The only issue I really had (and it was annoying, but there we go) was that I kept running out of memory on the D1 because, it turned out, of this line:

  if(event.gyro.x > 0.7 && event.gyro.y > 0.7 && event.gyro.z > 0.7){

The error is:

../../../xtensa-lx106-elf/bin/ld: Gyro_ESP_Twitter.ino.elf section `.text' will not fit in region `iram1_0_seg'
collect2: error: ld returned 1 exit status
exit status 1
Error compiling for board WeMos D1 R2 & mini.

I was convinced it was all the libraries I was using, but no. The line corrected to

 const float threshold = 0.7;
 if(event.gyro.x > threshold && event.gyro.y > threshold && event.gyro.z > threshold){

worked perfectly. My guess is that the bare 0.7 literals are doubles in C++, so the comparisons dragged in double-precision soft-float routines that had to fit in the ESP’s scarce instruction RAM; a const float (or writing 0.7f) keeps everything single precision. Oops 🙂


A speaking camera using Pi3 and Tensorflow

Danbri made one of these and I was so impressed I had a go myself, with a couple of tweaks. It’s very easy to do. He did all the figuring out what needed to be done – there’s something similar here which did the rounds recently. Others have done the really heavy lifting – in particular, making tensorflow work on the Pi.

Barnoid has done lovely things with a similar but cloud-based setup for his Poetoid Lyricam – he used a captioner similar to this one, which isn’t quite on a Pi with Python hooks yet (but nearly). (Barnoid update – he used Torch with neuraltalk2.)

The gap between taking a photo and it speaking is 3-4 seconds. Sometimes it seems to cache the last photo. It’s often wrong 🙂

I used USB audio and a powerfulish speaker. A DAC would also be a good idea.


Image the pi and configure

diskutil list
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN

log in to the pi, expand file system, enable camera

sudo raspi-config

optionally, add in usb audio or a DAC



install pico2wave (I tried espeak but it was very indistinct)

sudo pico /etc/apt/sources.list

# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src jessie main contrib non-free rpi

sudo apt-get update
sudo apt-get install fakeroot debhelper libtool help2man libpopt-dev hardening-wrapper autoconf
sudo apt-get install automake1.1 # requires this version
mkdir pico_build
cd pico_build
apt-get source libttspico-utils
cd svox-1.0+git20130326 
dpkg-buildpackage -rfakeroot -us -uc
cd ..
sudo dpkg -i libttspico-data_1.0+git20130326-3_all.deb
sudo dpkg -i libttspico0_1.0+git20130326-3_armhf.deb
sudo dpkg -i libttspico-utils_1.0+git20130326-3_armhf.deb


sudo apt-get install mplayer
pico2wave -w test.wav "hello alan" && mplayer test.wav
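For convenience you could wrap that pair of commands in a tiny shell function for the captioning script to call. A sketch, assuming pico2wave and mplayer are installed as above (the temp-file location is arbitrary):

```shell
# speak: say the given text aloud via pico2wave + mplayer,
# cleaning up the temporary wav file afterwards
speak() {
    wav=$(mktemp /tmp/speech-XXXXXX.wav)
    pico2wave -w "$wav" "$1" && mplayer -really-quiet "$wav"
    rm -f "$wav"
}
```

Then the camera pipeline can just do something like speak "I can see a teapot".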

install tensorflow on raspi

sudo apt-get install python-pip python-dev
sudo pip install tensorflow-0.10.0-cp27-none-linux_armv7l.whl
install prerequisites
git clone # takes ages
sudo pip install imutils picamera 
sudo apt-get install python-opencv


cd /home/pi/tensorflow/tensorflow/models/image/imagenet

install danbri / my hacked version of

curl -O ""





Machine learning links

[work in progress – I’m updating it gradually]

Machine Learning

Google Apologizes After Photos App Autotags Black People as ‘Gorillas’ – a very upsetting and embarrassing misclassification. Flickr’s system did the same thing but in a less visible way.

How Vector Space Mathematics Reveals the Hidden Sexism in Language – very interesting work analysing Word2vec, and particularly their mechanisms for fixing the problem

There is a blind spot in AI research  Kate Crawford and Ryan Calo, Nature, October 2016 – a call for “a practical and broadly applicable social-systems analysis [that] thinks through all the possible effects of AI systems on all parties”

a ProPublica investigation in May 2016 found that the proprietary algorithms widely used by judges to help determine the risk of reoffending are almost twice as likely to mistakenly flag black defendants than white defendants

As a first step, researchers — across a range of disciplines, government departments and industry — need to start investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on.

Maciej Cegłowski – SASE Panel – Maciej on why not being able to understand the mechanisms by which ML systems come to their results is problematic, or as he puts it

“Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias.”

All it takes to steal your face is a special pair of glasses – report on a paper experimentally tricking a commercial face recognition system into misidentifying people as specific individuals. It depends on a feature of some DNNs whereby small perturbations to an image can produce misclassifications – as described in the next paper:

Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network’s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.


Filter bubbles

How the Internet Is Loosening Our Grip on the Truth – 

In a recent Pew Research Center survey, 81 percent of respondents said that partisans not only differed about policies, but also about “basic facts.”


Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

The spreading of misinformation online – Del Vicario, Michela; Bessi, Alessandro; Zollo, Fabiana; Petroni, Fabio; Scala, Antonio; Caldarelli, Guido; Stanley, H. Eugene; Quattrociocchi, Walter. Proceedings of the National Academy of Sciences, 113 (3), pp. 554–559. ISSN 1091-6490 (2016)


Many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction.


Our findings show that users mostly tend to select and share content related to a specific narrative and to ignore the rest. In particular, we show that social homogeneity is the primary driver of content diffusion, and one frequent result is the formation of homogeneous, polarized clusters.

The End of the Echo Chamber – Feb 2012. Summary of Facebook’s large-scale experiments in 2010 with selective removal of links surfaced by EdgeRank (Facebook’s news feed ranking algorithm).

If an algorithm like EdgeRank favors information that you’d have seen anyway, it would make Facebook an echo chamber of your own beliefs. But if EdgeRank pushes novel information through the network, Facebook becomes a beneficial source of news rather than just a reflection of your own small world.


… it doesn’t address whether those stories differ ideologically from our own general worldview. If you’re a liberal but you don’t have time to follow political news very closely, then your weak ties may just be showing you lefty blog links that you agree with—even though, under Bakshy’s study, those links would have qualified as novel information

What’s wrong with Big Data – some interesting examples (pharmacology and chess), but the overall argument is a bit unclear.


Artificial Intelligence Is Helping The Blind To Recognize Objects

UK Hospitals Are Feeding 1.6 Million Patients’ Health Records to Google’s AI

Speak, Memory – When her best friend died, she rebuilt him using artificial intelligence – a chatbot version of a real person based on chatlogs. You could probably do that with me…


A presence robot with Chromium, WebRTC, Raspberry Pi 3 and EasyRTC

Update – I’ve been doing more experiments with WebRTC on the Pi – latest is here. Many of the instructions below are still valid though.

Here’s how to make a presence robot with Chromium 51, WebRTC, Raspberry Pi 3 and EasyRTC. It’s actually very easy, especially now that Chromium 51 ships with Raspbian Jessie, although it took me a long time to find the exact incantation.

If you’re going to use it for real, I’d suggest using the Jabra 410 speaker / mic. I find that audio is always the most important part of a presence robot, and the Jabra provides excellent sound for a meeting of 5 – 8 people and will work for meetings with larger groups too. I’ve had the most reliable results using a separate power supply for the Jabra, via a powered hub. The whole thing still occasionally fails, so this is a work in progress. You’ll need someone at the other end to plug it in for you.

I’ve had fair success with a “portal” type setup with the Raspberry Pi touchscreen, but it’s hard to combine the Jabra and the screen in a useful box.


As you can see, the current container needs work:


Next things for me will be some sort of expressivity and / or movement. Tristan suggests emoji. Tim suggests pipecleaner arms. Henry’s interested more generally in emotion expressed via movement. I want to be able to rotate. All can be done via the WebRTC data channel I think.

You will need

  • Raspberry Pi 3 + SD card + 2.5A power supply
  • Jabra Mic
  • Powered USB hub (I like this one)
  • A pi camera – I’ve only tested it with a V1
  • A screen (e.g. this TFT)
  • A server, e.g a Linode, running Ubuntu 16 LTS. I’ve had trouble with AWS for some reason, possibly a ports issue.


Set up the Pi

(don’t use jessie-lite, use jessie)

diskutil list
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN

Log in.

sudo raspi-config

expand file system, enable camera (and spi if using a TFT) and boot to desktop, logged in

Update everything

sudo apt-get update && sudo apt-get upgrade

Set up wifi

 sudo pico /etc/wpa_supplicant/wpa_supplicant.conf

Add drivers

sudo pico /etc/modules

Add V4l2 video drivers (for Chromium to pick up the camera): argh

sudo nano /etc/modprobe.d/bcm2835-v4l2.conf
options bcm2835-v4l2 gst_v4l2src_is_broken=1

Argh: USB audio

sudo pico /boot/config.txt 

#dtparam=audio=on ## comment this out
sudo pico /lib/modprobe.d/aliases.conf
#options snd-usb-audio index=-2 # comment this out
sudo pico ~/.asoundrc
defaults.pcm.card 1;
defaults.ctl.card 0;

Add mini TFT screen

curl -O
tar -zxvf LCD-show-160811.tar.gz
cd LCD-show/
sudo ./LCD35-show

Rename the bot

sudo pico /etc/hostname
sudo pico /etc/hosts

You may need to enable camera again via sudo raspi-config

Add autostart

pico ~/.config/lxsession/LXDE-pi/autostart
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@xset s off
@xset -dpms
@xset s noblank
#@v4l2-ctl --set-ctrl=rotate=270 # if you need to rotate the camera picture
@/bin/bash /home/pi/
#@rm -rf /home/pi/.config/chromium/
/usr/bin/chromium-browser --kiosk --disable-infobars --disable-session-crashed-bubble --no-first-run https://your-server:8443/bot.html#$myrandom &
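The #$myrandom fragment on the end of the bot URL gives each boot its own room. The bash script called from autostart is truncated above, so purely as an illustration, the variable could be generated like this (the URL shape comes from the chromium line; the 8-character id length is arbitrary):

```shell
# Build the bot URL with a random lowercase room id, so each boot
# joins a fresh room (bot.html is assumed to read the URL fragment)
myrandom=$(awk 'BEGIN{srand(); for(i=0;i<8;i++) printf "%c", 97+int(rand()*26)}')
url="https://your-server:8443/bot.html#$myrandom"
echo "$url"
```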

Assemble everything:

  • Connect the USB hub to the Raspberry Pi
  • Connect the Jabra to the USB hub
  • Attach the camera and TFT screen

On the server

Add keys for login

mkdir ~/.ssh
chmod 700 ~/.ssh
pico ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Install and configure Apache (I used this guide for letsencrypt)

sudo apt-get install apache2
sudo mkdir -p /var/www/your-server/public_html
sudo chown -R $USER:$USER /var/www/your-server/public_html
sudo chmod -R 755 /var/www
nano /var/www/your-server/public_html/index.html
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/your-server.conf
sudo nano /etc/apache2/sites-available/your-server.conf
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName your-server
        ServerAlias your-server
        ErrorLog ${APACHE_LOG_DIR}/your-server_error.log
        CustomLog ${APACHE_LOG_DIR}/your-server_access.log combined
        RewriteEngine on
        RewriteCond %{SERVER_NAME} = your-server
        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>
sudo a2enmod rewrite
sudo a2ensite your-server.conf
sudo service apache2 restart

Add certs

You can’t skip this part – Chrome and Chromium won’t allow camera and microphone access (getUserMedia) without https

sudo apt-get install git
sudo git clone /opt/letsencrypt
cd /opt/letsencrypt
./letsencrypt-auto --apache -d your-server
sudo mkdir -p /var/log/lets-encrypt

Auto-renew certs

sudo /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log
crontab -e
# m h  dom mon dow   command
30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log

Get and install the EasyRTC code

Install node

curl -sL | sudo bash -

sudo apt-get install -y nodejs

Install the easyrtc api

cd /var/www/your-server/
git clone

Replace the server part with my version

cd server
rm -r *
git clone
cd ..
sudo npm install

Run the node server

nohup node server.js &


Boot up the pi, and on your other machine go to


in Chrome.

When the Pi boots up it should go into full screen Chromium at https://your-server:8443/bot.html  – there should be a prompt to accept the audio and video on the pi – you need to accept that once and then it’ll work.


Camera light doesn’t go on

Re-enable the camera using

sudo raspi-config

No video

WebRTC needs a lot of ports open. With this config we’re just using some default STUN and TURN ports. On most wifi networks it should work, but on some restricted or corporate networks you may have trouble. I’ve not tried running my own TURN servers, which in theory would help with this.
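If you did run your own TURN server, this is the general shape of the ICE server configuration the browser side would need – the hostnames and credentials here are placeholders, and EasyRTC has its own server-side option for passing ICE servers in:

```
{
  "iceServers": [
    { "urls": "stun:stun.l.google.com:19302" },
    { "urls": "turn:turn.example.org:3478",
      "username": "webrtcuser",
      "credential": "secret" }
  ]
}
```

A TURN entry like the second one relays media when direct peer-to-peer traffic is blocked, which is exactly the restricted-network case above.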

No audio

I find linux audio incredibly confusing. The config above is based around this answer. YMMV especially if you have other devices attached.