Category Archives: Uncategorized

A simple Raspberry Pi-based picture frame using Flickr

I made this Raspberry Pi picture frame – initially with a screen – as a present for my parents for their wedding anniversary. After user testing, I realised that what they really wanted was a tiny version that they could plug into their TV, so I recently made a Pi Zero version to do that.

It uses a Raspberry Pi 3 or Pi Zero with full-screen Chromium. I’ve used Flickr as a backend: I made a new account and used their handy upload-by-email function (which you can set to make uploads private) so that all members of the family can send pictures to it.
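
For a sense of the moving parts, fetching the photo list is a single REST call. A rough sketch, not the frame’s actual code – the API key and NSID below are placeholders, and because the uploads are private you’d really need an OAuth-signed request:

# fetch the frame account's photo URLs from the Flickr REST API
import requests

FLICKR_API_KEY = "your-api-key"      # placeholder
USER_NSID = "12345678@N00"           # placeholder: the frame account's NSID

resp = requests.get("https://api.flickr.com/services/rest/", params={
    "method": "flickr.people.getPhotos",
    "api_key": FLICKR_API_KEY,
    "user_id": USER_NSID,
    "extras": "url_l",               # ask Flickr to include large-image URLs
    "format": "json",
    "nojsoncallback": 1,
})
for p in resp.json()["photos"]["photo"]:
    if "url_l" in p:
        print(p["url_l"])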

[photo: the finished picture frame]

I initially assumed that a good frame would be ambient – stay on the same picture for, say, 5 minutes, only update itself once a day, and show the latest pictures in order. But user testing – and specifically an uproarious party where we were all uploading pictures to it and wanted to see them immediately – forced a redesign. It now updates itself every 15 minutes, uses the latest two plus a random sample from all available pictures, shows each picture for 20 seconds, and caches the images to make sure that I don’t use up all my parents’ bandwidth allowance.
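
The scheduling logic is simple enough to sketch (illustrative names, not the real code):

import random

def pick_playlist(photos, sample_size=20):
    """photos: cached image paths, oldest first. Always include the two
    newest, pad with a random sample of the rest, then shuffle."""
    newest, rest = photos[-2:], photos[:-2]
    playlist = newest + random.sample(rest, min(sample_size, len(rest)))
    random.shuffle(playlist)
    return playlist

# each picture is shown for 20 seconds; the playlist is rebuilt every
# 15 minutes from the freshly fetched (and locally cached) image set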

The only tricky technical issue is finding the best part of the picture to display. It will usually be a landscape display (although you could configure a Pi screen to be vertical), so that means you’re either going to get a lot of black onscreen or you’ll need to figure out a rule of thumb. On a big TV this is maybe less important. I never got amazing results, but I had a play with both heuristics and face detection, and both moderately improved matters.
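
Here’s the flavour of the face-detection version – a sketch using OpenCV’s stock Haar cascade rather than my exact code: find faces, then centre a landscape crop on them.

# centre a landscape-shaped crop on any detected faces
import cv2

# cascade path varies by OpenCV install
CASCADE = "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml"

def landscape_crop(path, ratio=16 / 9.0):
    img = cv2.imread(path)
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cv2.CascadeClassifier(CASCADE).detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        # centre on the mean of the face centres
        cx = int(sum(x + fw / 2.0 for (x, y, fw, fh) in faces) / len(faces))
    else:
        cx = w // 2  # rule-of-thumb fallback: middle of the picture
    crop_w = min(w, int(h * ratio))
    left = max(0, min(w - crop_w, cx - crop_w // 2))
    return img[:, left:left + crop_w]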

It’s probably not a great deal different to what you’d get in any off-the-shelf electronic picture frame, but I like it because it’s fun and easy to make, configurable and customisable. And you can just plug it into your TV. You could make one for several members of a group or family based on the same set of pictures, too.

Version 1: Pi3, official touchscreen (you’ll need a 2.5A power supply to power them together), 8GB micro SD card, and (if you like) a ModMyPi customisable Pi screen stand.

Version 2: Pi Zero, micro USB converter, USB wifi, mini HDMI converter, HDMI cable, 8GB micro SD card, data micro USB cable, maybe a case.

The Zero can’t really manage the face detection, though I’m not convinced it matters much.

It should take < 30 minutes.

The code and installation instructions are here.

 

 

LIRC on a Raspberry Pi for a silver Apple TV remote

The idea here is to control a full-screen chromium webpage via a simple remote control (I’m using it for full-screen TV-like prototypes). It’s very straightforward really, but

  • I couldn’t find the right kind of summary of LIRC, the Linux infrared remote tool
  • It took me a while to realise that the receiver end of IR is very easy if you have GPIO pins (as on a Raspberry Pi) rather than using USB – see the sketch below
  • Chromium 51 is now the default browser in Raspbian Jessie, making it easy to do quite sophisticated web interfaces on the Pi 3
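
For the record, the receiver side is just a three-pin IR sensor (e.g. a TSOP38238) wired to power, ground and a GPIO pin, plus the lirc-rpi overlay. A sketch of the Jessie-era setup – pin 18 is my assumption, use whichever pin you wired the sensor’s data leg to:

sudo apt-get install lirc

# in /boot/config.txt – enable the lirc-rpi kernel overlay
dtoverlay=lirc-rpi,gpio_in_pin=18

# in /etc/lirc/hardware.conf
DRIVER="default"
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

# reboot, then check that raw pulses arrive when you press buttons
mode2 -d /dev/lirc0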

The key part is the mapping between what LIRC (or its daemon version) does when it hears a keypress and what you want it to do. Basically there’s a chain: lircd decodes the button press using lircd.conf, irexec matches the button name against ~/.lircrc and runs the command listed there, and that command uses xdotool to fake a keypress in the Chromium window, where the page’s Javascript handles it like any other keyboard event.

It’s all a bit convoluted – remote key presses get mapped to the nearest keyboard equivalents, which are then handled in Javascript in the HTML page. But it works perfectly.

I’ve provided an example lircd.conf for silver Apple TV remotes, but you can generate your own for any IR remote easily enough with LIRC’s irrecord tool.

The way I’ve used xdotool assumes there’s a single webpage open in chromium with a specific title – “lirc-example”.
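
Putting those together, the ~/.lircrc entries for irexec look something like this (a sketch – the button names must match those in your lircd.conf):

# ~/.lircrc – read by irexec; one stanza per remote button
begin
    prog = irexec
    button = KEY_UP
    # find the Chromium window by its title and fake an Up-arrow press
    config = DISPLAY=:0 xdotool search --name "lirc-example" key --window %1 Up
end
begin
    prog = irexec
    button = KEY_PLAY
    config = DISPLAY=:0 xdotool search --name "lirc-example" key --window %1 Return
end

The page itself then just listens for ordinary keydown events in Javascript.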

Full instructions are on GitHub.

[photo: the IR receiver wired to the Pi 3]

Links

lirc

xdotool

irexec

A speaking camera using Pi3 and Tensorflow

Danbri made one of these and I was so impressed I had a go myself, with a couple of tweaks. It’s very easy to do. He figured out everything that needed to be done – there’s something similar here which did the rounds recently. Others have done the really heavy lifting – in particular, making TensorFlow work on the Pi.

Barnoid has done lovely things with a similar but cloud-based setup for his Poetoid Lyricam – he used a captioner similar to this one, which isn’t quite on a Pi with Python hooks yet (but nearly). (Barnoid update: he used Torch with neuraltalk2.)

The gap between taking a photo and it speaking is 3-4 seconds. Sometimes it seems to cache the last photo. It’s often wrong 🙂
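
The loop boils down to something like this – a sketch, not the hacked script itself. Note that shelling out to classify_image.py reloads the model on every shot, so keeping the TensorFlow session open between shots is the obvious win:

# rough shape of the capture -> classify -> speak loop
import subprocess
from picamera import PiCamera

def caption_image(path):
    # the stock classify_image.py accepts --image_file; its first output
    # line looks like "lakeside, lakeshore (score = 0.67)"
    out = subprocess.check_output(
        ["python", "classify_image.py", "--image_file", path])
    return out.splitlines()[0].split("(score")[0].strip()

def speak(text):
    subprocess.call(["pico2wave", "-w", "/tmp/say.wav", text])
    subprocess.call(["mplayer", "/tmp/say.wav"])

camera = PiCamera()
while True:
    camera.capture("/tmp/snap.jpg")  # overwritten each shot
    speak("I see " + caption_image("/tmp/snap.jpg"))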

I used USB audio and a powerfulish speaker. A DAC would also be a good idea.

Instructions

Image the pi and configure

# on a Mac: find the SD card's device number N, unmount it, then image it
diskutil list
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN

log in to the pi, expand file system, enable camera

sudo raspi-config

optionally, add in usb audio or a DAC

Test

speaker-test

install pico2wave (I tried espeak but it was very indistinct)

sudo pico /etc/apt/sources.list

# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src http://archive.raspbian.org/raspbian/ jessie main contrib non-free rpi

sudo apt-get update
sudo apt-get install fakeroot debhelper libtool help2man libpopt-dev hardening-wrapper autoconf
sudo apt-get install automake1.11 # requires this version
mkdir pico_build
cd pico_build
apt-get source libttspico-utils
cd svox-1.0+git20130326 
dpkg-buildpackage -rfakeroot -us -uc
cd ..
sudo dpkg -i libttspico-data_1.0+git20130326-3_all.deb
sudo dpkg -i libttspico0_1.0+git20130326-3_armhf.deb
sudo dpkg -i libttspico-utils_1.0+git20130326-3_armhf.deb

test

sudo apt-get install mplayer
pico2wave -w test.wav "hello alan" && mplayer test.wav

install tensorflow on raspi

sudo apt-get install python-pip python-dev
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/raw/master/bin/tensorflow-0.10.0-cp27-none-linux_armv7l.whl
sudo pip install tensorflow-0.10.0-cp27-none-linux_armv7l.whl
install prerequisites for classify_image.py
git clone https://github.com/tensorflow/tensorflow.git # takes ages
sudo pip install imutils picamera 
sudo apt-get install python-opencv

test

cd /home/pi/tensorflow/tensorflow/models/image/imagenet

install danbri / my hacked version of classify_image.py

mv classify_image.py classify_image.py.old
curl -O "https://gist.githubusercontent.com/libbymiller/afb715ac53dcc7b85cd153152f6cd75a/raw/2224179cfdc109edf2ce8408fe5e81ce5a265a6e/classify_image.py"

run

python classify_image.py

done!

 

Machine learning links

[work in progress – I’m updating it gradually]

Machine Learning

Google Apologizes After Photos App Autotags Black People as ‘Gorillas’ – a very upsetting and embarrassing misclassification. Flickr’s system did the same thing but in a less visible way.

How Vector Space Mathematics Reveals the Hidden Sexism in Language – very interesting work analysing Word2vec, and particularly their mechanisms for fixing the problem

There is a blind spot in AI research – Kate Crawford and Ryan Calo, Nature, October 2016 – a call for “a practical and broadly applicable social-systems analysis [that] thinks through all the possible effects of AI systems on all parties”

a ProPublica investigation in May 2016 found that the proprietary algorithms widely used by judges to help determine the risk of reoffending are almost twice as likely to mistakenly flag black defendants than white defendants

As a first step, researchers — across a range of disciplines, government departments and industry — need to start investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on.

Maciej Cegłowski – SASE Panel – Maciej on why not being able to understand the mechanisms by which ML systems come to their results is problematic, or as he puts it

“Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias.”

All it takes to steal your face is a special pair of glasses – report on a paper experimentally tricking a commercial face recognition system into misidentifying people as specific individuals. It depends on a feature of some DNNs whereby small perturbations in an image can produce misclassifications, as described in the next paper:

Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network’s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.

 

Filter bubbles

How the Internet Is Loosening Our Grip on the Truth

In a recent Pew Research Center survey, 81 percent of respondents said that partisans not only differed about policies, but also about “basic facts.”

[…]

Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

The spreading of misinformation online – Del Vicario, Michela; Bessi, Alessandro; Zollo, Fabiana; Petroni, Fabio; Scala, Antonio; Caldarelli, Guido; Stanley, H. Eugene; Quattrociocchi, Walter. Proceedings of the National Academy of Sciences, 113(3), pp. 554–559. ISSN 1091-6490 (2016)

 

Many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction.

[…]

Our findings show that users mostly tend to select and share content related to a specific narrative and to ignore the rest. In particular, we show that social homogeneity is the primary driver of content diffusion, and one frequent result is the formation of homogeneous, polarized clusters.

The End of the Echo Chamber – Feb 2012. A summary of Facebook’s large-scale 2010 experiments with the selective removal of links from EdgeRank (the Facebook newsfeed display algorithm).

If an algorithm like EdgeRank favors information that you’d have seen anyway, it would make Facebook an echo chamber of your own beliefs. But if EdgeRank pushes novel information through the network, Facebook becomes a beneficial source of news rather than just a reflection of your own small world.

[…]

… it doesn’t address whether those stories differ ideologically from our own general worldview. If you’re a liberal but you don’t have time to follow political news very closely, then your weak ties may just be showing you lefty blog links that you agree with—even though, under Bakshy’s study, those links would have qualified as novel information

What’s wrong with Big Data – some interesting examples (pharmacology and chess), but the overall argument is a bit unclear.

Applications

Artificial Intelligence Is Helping The Blind To Recognize Objects

UK Hospitals Are Feeding 1.6 Million Patients’ Health Records to Google’s AI

Speak, Memory – When her best friend died, she rebuilt him using artificial intelligence – a chatbot version of a real person based on chatlogs. You could probably do that with me…

 

A presence robot with Chromium, WebRTC, Raspberry Pi 3 and EasyRTC

Here’s how to make a presence robot with Chromium 51, WebRTC, Raspberry Pi 3 and EasyRTC. It’s actually very easy, especially now that Chromium 51 comes with Raspbian Jessie, although it took me a long time to find the exact incantation.

If you’re going to use it for real, I’d suggest using the Jabra 410 speaker / mic. I find that audio is always the most important part of a presence robot, and the Jabra provides excellent sound for a meeting of 5 – 8 people and will work for meetings with larger groups too. I’ve had the most reliable results using a separate power supply for the Jabra, via a powered hub. The whole thing still occasionally fails, so this is a work in progress. You’ll need someone at the other end to plug it in for you.

I’ve had fair success with a “portal” type setup with the Raspberry Pi touchscreen, but it’s hard to combine the Jabra and the screen in a useful box.

[photo: the presence robot]

As you can see, the current container needs work:

[photo: the current container]

Next things for me will be some sort of expressivity and / or movement. Tristan suggests emoji. Tim suggests pipecleaner arms. Henry’s interested more generally in emotion expressed via movement. I want to be able to rotate. All can be done via the WebRTC data channel I think.

You will need

  • Raspberry Pi 3 + SD card + 2.5A power supply
  • Jabra Mic
  • Powered USB hub (I like this one)
  • A Pi camera – I’ve only tested it with a V1
  • A screen (e.g. this TFT)
  • A server, e.g. a Linode, running Ubuntu 16.04 LTS. I’ve had trouble with AWS for some reason, possibly a ports issue.

Instructions

Set up the Pi

(don’t use jessie-lite, use jessie)

diskutil list
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN

Log in.

sudo raspi-config

Expand the file system, enable the camera (and SPI if using a TFT), and set it to boot to desktop, logged in.

Update everything

sudo apt-get update && sudo apt-get upgrade

Set up wifi

 sudo pico /etc/wpa_supplicant/wpa_supplicant.conf
 
 network={
   ssid="foo"
   psk="bar"
 }

Add drivers

sudo pico /etc/modules
i2c-dev
snd-bcm2835
bcm2835-v4l2

Add the V4L2 video driver options (so that Chromium picks up the camera): argh

sudo nano /etc/modprobe.d/bcm2835-v4l2.conf
options bcm2835-v4l2 gst_v4l2src_is_broken=1

Argh: USB audio

sudo pico /boot/config.txt 

#dtparam=audio=on ## comment this out
sudo pico /lib/modprobe.d/aliases.conf
#options snd-usb-audio index=-2 # comment this out
sudo pico ~/.asoundrc
defaults.pcm.card 1;
defaults.ctl.card 0;

Add mini tft screen (see http://www.spotpear.com/learn/EN/raspberry-pi/Raspberry-Pi-LCD/Drive-the-LCD.html )

curl -O http://www.spotpear.com/download/diver24-5/LCD-show-160811.tar.gz
tar -zxvf LCD-show-160811.tar.gz
cd LCD-show/
sudo ./LCD35-show

Rename the bot

sudo pico /etc/hostname
sudo pico /etc/hosts

You may need to enable camera again via sudo raspi-config

Add autostart

pico ~/.config/lxsession/LXDE-pi/autostart
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@xset s off
@xset -dpms
@xset s noblank
#@v4l2-ctl --set-ctrl=rotate=270 # if you need to rotate the camera picture
@/bin/bash /home/pi/start_chromium.sh
Then create the script it runs:

pico /home/pi/start_chromium.sh

#!/bin/bash
# random fragment appended to the bot URL
myrandom=$RANDOM
# uncomment to clear Chromium's state if it gets stuck:
#rm -rf /home/pi/.config/chromium/
/usr/bin/chromium-browser --kiosk --disable-infobars --disable-session-crashed-bubble --no-first-run https://your-server:8443/bot.html#$myrandom &

Assemble everything:

  • Connect the USB hub to the Raspberry Pi
  • Connect the Jabra to the USB hub
  • Attach the camera and TFT screen

On the server

Add keys for login

mkdir ~/.ssh
chmod 700 ~/.ssh
pico ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Install and configure Apache (I used this guide for letsencrypt)

sudo apt-get install apache2
sudo mkdir -p /var/www/your-server/public_html
sudo chown -R $USER:$USER /var/www/your-server/public_html
sudo chmod -R 755 /var/www
nano /var/www/your-server/public_html/index.html
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/your-server.conf
sudo nano /etc/apache2/sites-available/your-server.conf
   
<VirtualHost *:80>     
        ServerAdmin webmaster@localhost
        ServerName your-server
        ServerAlias your-server
        ErrorLog ${APACHE_LOG_DIR}/your-server_error.log
        CustomLog ${APACHE_LOG_DIR}/your-server_access.log combined
RewriteEngine on
RewriteCond %{SERVER_NAME} = your-server
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>
sudo a2ensite your-server.conf
sudo service apache2 reload
sudo service apache2 restart

Add certs

You can’t skip this part – Chrome and Chromium won’t give a page access to the camera and microphone without https

sudo apt-get install git
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt
./letsencrypt-auto --apache -d your-server
sudo mkdir -p /var/log/lets-encrypt

Auto-renew certs

sudo /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log
crontab -e
# m h  dom mon dow   command
30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log

Get and install the EasyRTC code

Install node

curl -sL https://deb.nodesource.com/setup | sudo bash -

sudo apt-get install -y nodejs

Install the easyrtc api

cd /var/www/your-server/
git clone https://github.com/priologic/easyrtc

Replace the server part with my version

cd server
rm -r *
git clone https://github.com/libbymiller/libbybot.git
cd ..
sudo npm install

Run the node server

nohup node server.js &

Finally

Boot up the pi, and on your other machine go to

https://your-server:8443/remote.html

in Chrome.

When the Pi boots up it should go into full screen Chromium at https://your-server:8443/bot.html  – there should be a prompt to accept the audio and video on the pi – you need to accept that once and then it’ll work.
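
If re-accepting that prompt ever becomes a nuisance, Chromium has a --use-fake-ui-for-media-stream flag that auto-accepts media prompts. It’s not part of this setup, but adding it to the line in start_chromium.sh should work:

/usr/bin/chromium-browser --kiosk --use-fake-ui-for-media-stream --disable-infobars --disable-session-crashed-bubble --no-first-run https://your-server:8443/bot.html#$myrandom &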

Troubleshooting

Camera light doesn’t go on

Re-enable the camera using

sudo raspi-config

No video

WebRTC needs a lot of ports open. With this config we’re just using the default STUN and TURN servers and ports. On most wifi networks it should work, but on some restricted or corporate networks you may have trouble. I’ve not tried running my own TURN server, which in theory would help with this.
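
For completeness, if you do want to run your own TURN server, coturn is the usual choice. A minimal, untested sketch – the realm and credentials are placeholders, and you’d then need to point the EasyRTC ICE server configuration at it:

sudo apt-get install coturn

# /etc/turnserver.conf
listening-port=3478
fingerprint
lt-cred-mech
realm=your-server
user=webrtcuser:somepassword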

No audio

I find Linux audio incredibly confusing. The config above is based on this answer. YMMV, especially if you have other devices attached.