Walls Have Eyes goes to the Design Museum

We initially put this together for a presentation at work, but it doesn’t seem quite right for a work blogpost (though we will do one of those too), and it seemed a shame for it not to be public.

The context is this: a fairly quick hack that Andrew Nicolaou, Jasmine Cox and I made for Mozfest got nominated for Designs of the Year 2015, so we redesigned it to last five months in place at the museum as part of their exhibition (we hoped; we’ve had some teething problems).

This was something completely new to me and Andrew, though Jasmine is much more experienced at making these kinds of things.

This is the story of us setting it all up.

Jasmine took most of the photos.

As Andrew pointed out, it’s come out a bit like a Peter and Jane book.

We initially made Walls Have Eyes very quickly as part of Ian and Jasmine’s Ethical Dilemma Cafe

The combination of electronics in innocuous frames


and an extremely noisy dot matrix printer


and an updating HTML output from the cameras meant that it got the message across quite well


Then we were unexpectedly nominated for the Designs of the Year, which meant we had to build something that would last five months.

So we needed to redesign it a bit and improve the code


It was going to be on a wall rather than in an ambient cafe environment, so it needed a trigger to make the experience more immediate, like this ultrasonic sensor

It needed wired networking rather than wifi for reliability, and we needed to test it intensively


so Andrew and Libby rewrote the code (mostly Andrew).


Andrew designed and laser cut some beautiful glowing fittings for the frames


Andrew, Dan and Libby tested it at QCon, including creating a ‘surveillance owl’ fitting for the sensor


and working through a load of issues


By Thursday we had all the bits more or less working in the kitchen


On Friday morning we took it all to the Design Museum, realising in the process that we needed better bags


At the museum, this was the first time we’d put the Raspberry Pis in the frames


and consequently that took a while



Then placement took even longer


and involved drilling


and pondering


and threading wires through holes


We didn’t quite get it ready by the end of Friday and had a mad dash to get trains, punctuated by Libby taking pictures of Tower Bridge


On Monday, Andrew did a very slow, stressful dash across London through the roadworks to pick up some postcards and sort out the networking so we could debug remotely.

Then on Tuesday we all went over to do some final tweaks


and debugging.


Then, finally, the party started.


and it was working!


and people were looking at it!


so we had a small beer.


and it works still…


…although yesterday we had to do a little fix.


A little stepper motor

I want to make a rotating 3D-printed head-on-a-spring for my podcast in a box (obviously). Last week I had a play with a tiny servo motor that Richard Sewell had given me ages ago, and discovered the difference between steppers, servos and DC motors. Servos can be modified to go all the way round – they don’t usually, as I found out last week – but modifying the 9g looked a bit hard. Meanwhile John Honniball kindly gave me a little Switec type stepper motor and some excellent links.

Getting it to work didn’t take too long – the code’s below, and there’s a simple Arduino library to make it work (on github) and some instructions to wire it up, including tips on getting the wires attached. I had to prise it apart to make it rotate continuously, but that was easy in this case (the one I had didn’t even have screws, just clips). I haven’t got a 3D-printed head yet, just a bit of wire, foil and foam sheet in its place (on a spring though).

Exciting video: https://www.flickr.com/photos/nicecupoftea/16759666917/


#include <SwitecX25.h>

// standard X25.168 range: 315 degrees at 1/3-degree steps
#define STEPS (315*3)

// motor driver pins on digital 4-7
SwitecX25 motor(STEPS, 4, 5, 6, 7);

int state = 1;   // 1 = keep rotating, 0 = stop

void setup(void) {
  Serial.begin(9600);   // listen for '0' / '1' commands over serial
}

void loop() {
  char c = Serial.read();   // returns -1 if nothing has been sent
  if (c == '0') {
    state = 0;
  }
  if (c == '1') {
    state = 1;
  }
  if (state == 1) {
    motor.currentStep = 0;         // reset origin on each rotation
    motor.setPosition(STEPS*2);    // set target to way past end of rotation
    while (motor.currentStep < STEPS - 1) {
      motor.update();              // turn until rotation is complete
    }
  }
}

Tiny micro servo 9g and Duemilanove w/ ATmega168

It’s eons since I messed with an Arduino, and I’ve forgotten it all. All I have handy is a Duemilanove with ATmega168, and the newish version of the Arduino IDE doesn’t have Duemilanove ATmega168 as an option. Following this otherwise perfect tutorial with IDE 1.6.1 (which works even though it’s for Java 6 and I have Java 8 on this Mac), you get this error if you choose just Duemilanove:

avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_getsync() attempt 1 of 10: not in sync: resp=0x00
Problem uploading to board.  See http://www.arduino.cc/en/Guide/Troubleshooting#upload for suggestions.
  This report would have more information with
  "Show verbose output during compilation"
  enabled in File > Preferences.

But it’s fine if you choose “Arduino NG or older” in tools > board, like this:


The other difference is that these days, you don’t have to reboot after installing the FTDI drivers.

Once I had Blink working, I moved on to this servo tutorial, since I have a tiny micro servo 9g from somewhere (Richard?).


In that tutorial it says you need to download and add the servo library; in fact it’s already there in “sketch > import library”.

Initially I got

Arduino: 1.6.1 (Mac OS X), Board: "Arduino NG or older, ATmega168"

Build options changed, rebuilding all
servo1.ino: In function 'void loop()':
servo1.ino:46:3: error: 'refresh' is not a member of 'Servo'
Error compiling.

  This report would have more information with
  "Show verbose output during compilation"
  enabled in File > Preferences.

The tutorial is evidently very out of date with respect to the servo library – this fixed it (and hi Anachrocomputer in that thread!)

From the Processing code in that tutorial:

//Output the servo position ( from 0 to 180)

it seemed I should be able to type “s100” into the serial monitor (set to a 19200 baud rate) and have the servo move. But nothing happened. So I put some more logging in, saw garbled output or nothing at all, and wondered whether my Arduino is too old to manage that baud rate. Trying a quick serial test, it reversed “foo” to “oof”, but who knows.

Anyway, a few things:
firstly, the code refers to the second servo, so the pin needs to be in analogue 2 not 1?
secondly, the serial monitor seems happier with line endings enabled
thirdly, and the actual problem: it turned out I had the pin wrong – that tutorial is very out of date, or the library’s changed a lot, or something. What you need is one of these, e.g. digital 9.
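For reference, here’s a minimal sketch of the sort of serial control that tutorial was aiming at. It’s my own reconstruction rather than the tutorial’s code: the parsing is mine, and the 19200 baud rate, the “s” prefix and pin 9 are taken from the notes above.

#include <Servo.h>

Servo myservo;

void setup() {
  Serial.begin(19200);   // match the baud rate the serial monitor is set to
  myservo.attach(9);     // servo signal wire on digital pin 9
}

void loop() {
  // expect commands like "s100" followed by a newline
  if (Serial.available() && Serial.read() == 's') {
    int pos = Serial.parseInt();       // read the number after 's'
    pos = constrain(pos, 0, 180);      // keep it within the servo's range
    myservo.write(pos);
    Serial.print("moving to ");
    Serial.println(pos);
  }
}

With the serial monitor at 19200 baud and line endings on, typing s90 should swing the servo to 90 degrees.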

Here’s the way I actually got it going, using the standard Sweep example.

Use this code:

// Sweep
// This example code is in the public domain.

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created
int pos = 0;    // variable to store the servo position

void setup()
{
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop()
{
  for (pos = 0; pos < 180; pos += 1)   // goes from 0 degrees to 180 degrees
  {                                    // in steps of 1 degree
    myservo.write(pos);                // tell servo to go to position in variable 'pos'
    delay(15);                         // waits 15ms for the servo to reach the position
  }
  for (pos = 180; pos >= 1; pos -= 1)  // goes from 180 degrees to 0 degrees
  {
    myservo.write(pos);                // tell servo to go to position in variable 'pos'
    delay(15);                         // waits 15ms for the servo to reach the position
  }
}
and wire the servo to the digital 9 pin as in the diagram on that page. Arduino pin naming seems nearly as confusing as the Raspberry Pi’s.

I think this little servo might be a bit noisy for what I have in mind, but it’s nice to have it working.


Catwigs, printing and boxes

Catwigs are a set of cards that help you interrogate your project and are described in detail here. This post is about how we got them made and how we put them in a box. Not everyone’s cup of tea perhaps.

At the start, we copied Giles’ use of Moo for his alpha “strategies for people like me” cards and had our catwigs printed at Moo, who print business cards, among other things, and are very reasonable. There are 25 catwigs in a set (analogies are pretty difficult to think up), so this posed us a problem: the Moo boxes (and most other business card boxes) were far too deep, as they’re all designed for 50 cards. In any case, Moo only provide a few boxes for any given delivery of cards. So what could we put them in that would fit correctly, be robust enough to carry round, but readily buyable or makeable at a reasonable price? It’s proved a surprisingly difficult problem.

First I put them in a slip of cardboard, inspired by some simple cardboard business card holders from Muji, using a bobble to secure them. They did the job (especially decorated with a round sticker from Moo) but we wanted something a bit more robust. I had a brief detour looking into custom-printed elastic, but didn’t get anywhere, sadly.


I started to investigate different types of boxes, and made a few prototypes from Richard’s big book of boxes.


The problem with these was that they all involved gluing and were quite fiddly to make, so they were a bit impractical for more than a few.

At this point we started to look into tins. Tins are cheap, readily available and robust. However, THEY ARE ALL THE WRONG SHAPE.

Our cards are “standard” business card size (there isn’t really a standard size, but this is a common one). The closest tin size is this one (95 x 65 x 21mm), which is fractionally too small when you take into account the curve of the corners, such that the cards get a bit stuck when you try to get them out. I had an entertaining time (not entertaining) cleaning off accidental cyanoacrylate-fumed fingerprints, caused by sticking in ribbons so that the cards could be removed more easily.


This tobacco tin isn’t a bad size, but the lid comes off too easily and it’s the wrong colour

These are near-perfect (found thanks to Dirk) but cannot be ordered from outside the Netherlands.

There are plenty of tins from Alibaba, but it’s very hard to know whether they will fit, because of the curved corner issue. We could also have had bespoke tins made but not in small numbers.

At this point we looked into personalised playing and game card printers. It turns out there are lots of these – and this is what Giles went with for his Blockbox cards in the end (from MakePlayingcards). Here are some possibilities (that last one lets you publish your own games). They’re all in the US, however, which adds a layer of complexity when everything gets stuck in customs.

We did order some at a very reasonable price from Spingold, but we ran into the box problem again – they’re happy to put fewer cards in the box, but custom boxes are more problematic. The cards themselves shuffle better than the Moo ones, but I prefer the matt texture of the Moo cards, and also that I can keep stock low and just order as many as I need.



(One important thing to take into account: if the depth of your box is close to 2.5cm, it costs £3.20 rather than the “large letter” rate of £0.93 to send them first class by Royal Mail, which is both annoying and wasteful.)

Anyway, this is our current box design for these beta cards, also inspired by one in Structural Packaging Design. I’ve been cutting and scoring them on a laser cutter from 320gsm “frost” polypropylene (from Paperchase). They’re reasonably robust, easy to put together and the correct size, so I think they’ll do for now.


Notes on a TV-to-Radio prototype

In the last couple of weeks at work we’ve been making “radios” in order to test the Radiodan prototyping approach. We each took an idea and tried to do a partial implementation. Here are some notes about mine. It’s not really a radio.


Make a TV that only shows the screen when someone is looking, else just play sound.

This is a response to the seemingly-pervasive idea that the only direction for TV is for devices and broadcasts (and catchup) to get more and more pixels. Evidence suggests that sometimes at least, TV is becoming radio with a large screen that is barely – but still sometimes – paid attention to.

There are a number of interesting things to think about once you see TV and radio as part of a continuum (like @fantasticlife does). One has to do with attention availability, and specifically the use cases that overlap with bandwidth availability.

As I watch something on TV and then move to my commuting device, why can’t it switch to audio only or text as bandwidth or attention allows? If it’s important for me to continue then why not let me continue in any way that’s possible or convenient? My original idea was to do a demo showing that as bandwidth falls, you get less and less resolution for the video, then audio (using the TV audio description track?) then text (then a series of vibrations or lights conveying mood?). There are obvious accessibility applications for this idea.

Then pushing it further, and to make a more interesting demo – much TV is consumed while doing something else. Most of the time you don’t need the pictures, only when you are looking. So can we solve that? Would it be bad if we did? What would it be like?

So: my goal is a startling demo of mixed-format content delivery according to immediate user needs, as judged by attention paid to the screen.


I’d also like to make some sillier variants:

  • Make a radio that is only on when someone is looking.
  • Make a tv that only shows the screen when someone is not looking.

I got part of the way – I can blank a screen when someone is facing it (or the reverse). Some basic code is here. Notes on installing it are below. It’s a slight update to this incredibly useful and detailed guide. The differences are that the face recognition code is now part of OpenCV, which makes it easier, and I’m not interested in whose face it is, only that there is one (or more than one), which simplifies things a bit.

I got as far as blanking the screen when it detects a face. I haven’t yet got it to work with video, but I think it has potential.

Implementation notes

My preferred platform is the Raspberry Pi, because then I can leave it in places for demos, make several cheaply and so on – though in this case using the Pi caused some problems.

Basically I need
1. something to do face detection
2. something to blank the screen
3. something to play video

1. Face detection turned out to be the reasonably easy part.

A number of people have tried using OpenCV for face detection on the Pi and got it working, either with C++ or using Python. For my purposes, Python was much too slow – about 10 seconds to process the video – but C++ was sub-second, which is good enough, I think. Details are below.
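For illustration, here’s a minimal C++ sketch of that kind of face detection. It’s a hedged reconstruction, not the code linked above: it reads from a standard webcam via cv::VideoCapture rather than the Pi camera pipeline, and the Haar cascade path is an assumption about where OpenCV is installed.

// build with something like:
// g++ facecount.cpp -o facecount `pkg-config --cflags --libs opencv`
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>
#include <vector>

int main() {
  cv::CascadeClassifier face_cascade;
  // stock frontal-face Haar cascade shipped with OpenCV (path may differ)
  if (!face_cascade.load("/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml"))
    return 1;

  cv::VideoCapture cap(0);   // first attached camera
  if (!cap.isOpened())
    return 1;

  cv::Mat frame, grey;
  while (cap.read(frame)) {
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(grey, grey);

    std::vector<cv::Rect> faces;
    face_cascade.detectMultiScale(grey, faces, 1.1, 3, 0, cv::Size(80, 80));

    // all we care about is whether at least one face is present
    std::printf("faces: %zu\n", faces.size());
  }
  return 0;
}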

2. Screen blanking turned out to be much more difficult, especially when combined with my lack of experience of C++.

My initial thought was to do the whole thing in HTML, e.g. with a full-screen browser, just hiding the video element when no face was present. I’d done tests with Epiphany and knew it could manage HTML5 video (MP4 streaming). However, on my Pi, Epiphany point-blank refuses to run – segfaulting every time – and I’ve not worked out what’s wrong (it could be that I started with a non-clean Pi image; I’ve not tested it on a clean one yet). Also, Epiphany can’t run in kiosk mode, which would probably mean the experience wouldn’t be great. So I looked into lower-level commands that should blank the screen whatever is playing.

First (thanks to Andrew’s useful notes) I started looking at CEC, an HDMI-level control interface that lets you do things like switch on a TV; the Raspberry Pi supports it, as do DVD players and the like. As far as I can tell, however, there are no CEC commands to blank the screen (you can switch the TV off, but that isn’t what I want, as it’s too slow).

There’s also something called DPMS, which lets you put a screen into hibernation and other power-saving modes, for HDMI and DVI too, I think. On Linux you can set it using xset; however, it only seems to work once. I lost a lot of time on this – DPMS is pretty messy and hard to debug. After much struggling – thinking it was my C++ / system calls that were the problem, replacing them with an X library, and finally making shell scripts and small C++ test cases – I figured it was just a bug.

Just for added fun, there’s no vbetool on the Pi, which would have been another method, and tvservice is too heavyweight.

I finally got it to work with xset s, which makes a (blank?) screensaver come on, and is near-instant, though that means I have to run X.

xset s activate #screen off
xset s reset #screen on
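To show how the detection and blanking halves fit together, here’s a hedged sketch (the function name and structure are mine, not from the code linked above) that shells out to those xset commands only when the “face present” state changes:

#include <cstdlib>

// call this once per processed frame with the result of the face detection
void updateScreen(bool faceSeen) {
  static int lastState = -1;          // -1 = not yet known
  int state = faceSeen ? 1 : 0;
  if (state == lastState)
    return;                           // only shell out when the state flips
  lastState = state;
  if (faceSeen)
    std::system("xset s reset");      // someone is looking: wake the screen
  else
    std::system("xset s activate");   // nobody looking: blank the screen
}

This assumes X is running and DISPLAY is set for whichever user runs the detection code.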

3. Something to play video

This is as far as I’ve got: I need to do a clean install and try Epiphany again, and also work out why OMXPlayer seems to cancel the screensaver somehow.



Links:

  • Installing OpenCV on a Pi
  • Face recognition and OpenCV
  • Python camera / face detection
  • Best guide on power save / screen blanking
  • DPMS example code
  • Samsung TV hacking
  • Run browser on startup

Notes on installing and running OpenCV on a Pi with video

1. sudo raspi-config
expand file system
enable camera

2. sudo apt-get update && sudo apt-get -y upgrade

3. cd /opt/vc
sudo git clone https://github.com/raspberrypi/userland.git

sudo apt-get install cmake

cd /opt/vc/userland 
sudo chown -R pi:pi .
sed -i 's/DEFINED CMAKE_TOOLCHAIN_FILE/NOT DEFINED CMAKE_TOOLCHAIN_FILE/g' makefiles/cmake/arm-linux.cmake

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

Go to /opt/vc/bin and test by typing: ./raspistill -t 3000

4. install opencv

sudo apt-get install libopencv-dev python-opencv

mkdir camcv
cd camcv
cp -r /opt/vc/userland/host_applications/linux/apps/raspicam/*  .

curl https://gist.githubusercontent.com/libbymiller/13f1cd841c5b1b3b7b41/raw/b028cb2350f216b8be7442a30d6ee7ce3dc54da5/gistfile1.txt > camcv_vid3.cpp

edit CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)
project( camcv_vid3 )
find_package( OpenCV REQUIRED )

add_executable(camcv_vid3 RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv_vid3.cpp
RaspiTex.c RaspiTexUtil.c
gl_scenes/teapot.c gl_scenes/models.c
gl_scenes/square.c gl_scenes/mirror.c gl_scenes/yuv.c gl_scenes/sobel.c tga.c)
target_link_libraries(camcv_vid3 /opt/vc/lib/libmmal_core.so
# the rest of this line was cut off – it also needs the other userland and
# OpenCV libraries, something along these lines:
/opt/vc/lib/libmmal_util.so /opt/vc/lib/libmmal_vc_client.so
/opt/vc/lib/libvcos.so /opt/vc/lib/libbcm_host.so
/opt/vc/lib/libGLESv2.so /opt/vc/lib/libEGL.so
${OpenCV_LIBS})

cmake .
make