Notes on a TV-to-Radio prototype

In the last couple of weeks at work we’ve been making “radios” in order to test the Radiodan prototyping approach. We each took an idea and tried to do a partial implementation. Here are some notes about mine. It’s not really a radio.

Idea

Make a TV that only shows the screen when someone is looking, else just play sound.

This is a response to the seemingly-pervasive idea that the only direction for TV is for devices and broadcasts (and catchup) to get more and more pixels. Evidence suggests that sometimes at least, TV is becoming radio with a large screen that is barely – but still sometimes – paid attention to.

There are a number of interesting things to think about once you see TV and radio as part of a continuum (like @fantasticlife does). One has to do with attention availability, and specifically the usecases that overlap with bandwidth availability.

As I watch something on TV and then move to my commuting device, why can’t it switch to audio only or text as bandwidth or attention allows? If it’s important for me to continue then why not let me continue in any way that’s possible or convenient? My original idea was to do a demo showing that as bandwidth falls, you get less and less resolution for the video, then audio (using the TV audio description track?) then text (then a series of vibrations or lights conveying mood?). There are obvious accessibility applications for this idea.

Then pushing it further, and to make a more interesting demo: much TV is consumed while doing something else. Most of the time you don’t need the pictures – only when you are looking. So can we solve that? Would it be bad if we did? What would it be like?

SO: my goal is a startling demo of mixed format content delivery according to immediate user needs as judged by attention paid to the screen.

[Image: TV as radio]

I’d also like to make some sillier variants:

  • Make a radio that is only on when someone is looking.
  • Make a TV that only shows the screen when someone is not looking.

I got part of the way – I can blank a screen when someone is facing it (or the reverse). Some basic code is here. Notes on installing it are below. It’s a slight update to this incredibly useful and detailed guide. The differences are that the face recognition code is now part of OpenCV, which makes it easier, and I’m not interested in whose face it is, only that there is one (or more than one), which simplifies things a bit.

I got as far as blanking the screen when it detects a face. I haven’t yet got it to work with video, but I think it has potential.

Implementation notes

My preferred platform is the Raspberry Pi, because then I can leave it places for demos, make several cheaply etc. Though in this case using the Pi caused some problems.

Basically I need
1. something to do face detection
2. something to blank the screen
3. something to play video

1. Face detection turned out to be the reasonably easy part.

A number of people have tried using OpenCV for face detection on the Pi and got it working, either with C++ or using Python. For my purposes Python was much too slow – about 10 seconds to process the video – but C++ was sub-second, which is good enough, I think. Details are below.

2. Screen blanking turned out to be much more difficult, especially when combined with C++ and my lack of experience with C++.

My initial thought was to do the whole thing in HTML, e.g. with a full-screen browser, just hiding the video element when no face was present. I’d done tests with Epiphany and knew it could manage HTML5 video (mp4 streaming). However, on my Pi Epiphany point-blank refuses to run – segfaulting every time – and I’ve not worked out what’s wrong (it could be that I started with a non-clean Pi image; I’ve not tested a clean one yet). Also, Epiphany can’t run in kiosk mode, which would probably mean the experience wouldn’t be great. So I looked into lower-level commands that should blank the screen whatever is playing.

First (thanks to Andrew’s useful notes) I started looking at CEC, an HDMI-level control interface that lets you do things like switch on a TV. The Raspberry Pi supports it, as do DVD players and the like. As far as I can tell, however, there are no CEC commands to blank the screen (you can switch the TV off, but that isn’t what I want, as it’s too slow).
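
For reference, this is what driving a TV over CEC looks like with cec-client from libcec (assuming it’s installed – it isn’t part of the stock image). It can switch the set on and off, but has nothing for blanking:

echo "on 0" | cec-client -s -d 1      # power on the TV (logical address 0)
echo "standby 0" | cec-client -s -d 1 # put it into standby – works, but slow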

There’s also something called DPMS, which lets you put a screen into hibernation and other power-saving modes – for HDMI and DVI too, I think. On Linux systems you can set it using xset; however, it only seemed to work once. I lost a lot of time on this: DPMS is pretty messy and hard to debug. After much struggling – first thinking my C++ system calls were the problem, then replacing them with an X library, and finally making shell scripts and small C++ test cases – I figured it was just a bug.
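
For reference, these are the xset DPMS commands in question (standard X, nothing Pi-specific):

xset dpms force off # put the screen to sleep immediately
xset dpms force on  # wake it again
xset q              # show the current DPMS settings and state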

Just for added fun, there’s no vbetool on the Pi, which would have been another method; and tvservice is too heavyweight.

I finally got it to work with xset s, which makes a (blank?) screensaver come on and is near-instant, though it means I have to run X.

xset s activate # screen off
xset s reset # screen on
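
The glue between the detector and the screen is then just a loop. Here’s a sketch, assuming (hypothetically – the real camcv_vid3 would need adapting) that the detector prints a line saying “face” or “no face” for each frame:

#!/bin/sh
# blank the screen while a face is visible; swap the two xset
# lines for the reverse (radio-that-needs-watching) variant
export DISPLAY=:0
./camcv_vid3 | while read status; do
  if [ "$status" = "face" ]; then
    xset s activate # someone is looking: blank
  else
    xset s reset    # no face: unblank
  fi
done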

3. Something to play video

This is as far as I’ve got: I need to do a clean install and try Epiphany again, and also work out why omxplayer seems to cancel the screensaver somehow (my guess – unconfirmed – is that omxplayer renders via the GPU’s dispmanx layers, on top of X, so an X screensaver can’t cover it).

——————————————————

Links

Installing opencv on a pi

Face recognition and opencv

Python camera / face detection

CEC

Best guide on power save / screen blanking

DPMS example code

xset

C++

xscreensaver

vbetool

tvservice

Samsung tv hacking

Run browser on startup

Notes on installing and running OpenCV on a Pi with video

1. sudo raspi-config
expand file system
enable camera
reboot

2. sudo apt-get update && sudo apt-get -y upgrade

3. cd /opt/vc
sudo git clone https://github.com/raspberrypi/userland.git

sudo apt-get install cmake

cd /opt/vc/userland 
sudo chown -R pi:pi .
sed -i 's/DEFINED CMAKE_TOOLCHAIN_FILE/NOT DEFINED CMAKE_TOOLCHAIN_FILE/g' makefiles/cmake/arm-linux.cmake

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

Go to /opt/vc/bin and test by typing: ./raspistill -t 3000

4. install opencv

sudo apt-get install libopencv-dev python-opencv

cd
mkdir camcv
cd camcv
cp -r /opt/vc/userland/host_applications/linux/apps/raspicam/*  .

curl https://gist.githubusercontent.com/libbymiller/13f1cd841c5b1b3b7b41/raw/b028cb2350f216b8be7442a30d6ee7ce3dc54da5/gistfile1.txt > camcv_vid3.cpp

edit CMakeLists.txt:

[[
cmake_minimum_required(VERSION 2.8)
project( camcv_vid3 )
SET(COMPILE_DEFINITIONS -Werror)
#OPENCV
find_package( OpenCV REQUIRED )

include_directories(/opt/vc/userland/host_applications/linux/libs/bcm_host/include)
include_directories(/opt/vc/userland/host_applications/linux/apps/raspicam/gl_scenes)
include_directories(/opt/vc/userland/interface/vcos)
include_directories(/opt/vc/userland)
include_directories(/opt/vc/userland/interface/vcos/pthreads)
include_directories(/opt/vc/userland/interface/vmcs_host/linux)
include_directories(/opt/vc/userland/interface/khronos/include)
include_directories(/opt/vc/userland/interface/khronos/common)
include_directories(./gl_scenes)
include_directories(.)
add_executable(camcv_vid3 RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv_vid3.cpp
RaspiTex.c RaspiTexUtil.c
gl_scenes/teapot.c gl_scenes/models.c
gl_scenes/square.c gl_scenes/mirror.c gl_scenes/yuv.c gl_scenes/sobel.c tga.c)
target_link_libraries(camcv_vid3 /opt/vc/lib/libmmal_core.so
/opt/vc/lib/libmmal_util.so
/opt/vc/lib/libmmal_vc_client.so
/opt/vc/lib/libvcos.so
/opt/vc/lib/libbcm_host.so
/opt/vc/lib/libGLESv2.so
/opt/vc/lib/libEGL.so
${OpenCV_LIBS}
pthread
-lm)
]]

cmake .
make
./camcv_vid3

Tangible enough

Thinking about the purpose of prototypes:

Make new and upcoming technologies and standards tangible enough to help people think through the consequences of them.

Technology is moving fast, but it is also unevenly distributed, and the consequences – good and bad – of emerging technologies may only become apparent as they move into the mainstream. By making these consequences tangible early we can choose between possible futures.

Links

MIT Design Fiction Research Group

What do Prototypes Prototype? by Stephanie Houde and Charles Hill

Intel’s Tomorrow project

What is Radiodan for?

This is my view only, and there’s a certain amount of thinking out loud / lack of checking / potentially high bullshit level.

Yesterday I was asked to comment on a Radiodan doc and this popped out:

Radiodan is a radical new way of putting audiences at the heart of media device development using imaginative forecasting / design fiction, participatory design, concurrent standards development with user testing, and user testing of new radio features on real devices.

1. Design fiction: building real-ish physical devices makes possible media-device futures more realistic for people, and thereby enables the exploration of more of the radio (and potentially TV) user-need-space (and its mapping to “product space”).
2. Radiodan enables Digital Creativity: the explosion of ideas generated by the Radiodan Creative Process©®™ shows that familiar media devices are a very productive area for experts and audiences to generate new ideas.
3. Concurrent code-and-standards development: most radio (and TV) standards are made without testing on users or running code – implementation comes afterwards. This means thousands of engineer-hours invested before knowing if anyone wants the features. Testing devices and features on users stops this incredible waste.
4. Real user input into real radios: new real features, pre-tested on users on realistic devices, to get genuine feedback, for inclusion into new or existing digital or physical products.

All these require fast, iterative, flexible, cheap but robust development, so that behaviour is known and testing can measure the difference. Techniques from web development applied to devices offer a way to do this. Research suggests that people make better decisions when shown product features embodied in devices in this way [citation needed], regardless of whether the end product is physical or digital.

This morning I’m wondering how much of that is true, so I thought I’d develop some of the ideas a bit further.

Design Fiction / Scifi and Radiodan

I’m becoming increasingly interested in design fiction and also scifi (thanks mainly to Richard Sewell) as a way of expanding our imagination about what future possibilities there could be (here’s a discussion about the relationship between design fiction and scifi).

I think that a great many products, services and the like get stuck at local maxima, and it’s then hard to think about possible alternative futures. Richard Pope has an interesting take on this area.

What’s become clear to me is that (a) silly or surprising or unusual things make people think about alternatives more creatively, and (b) their thinginess is an inherent part of that …er… surprisal. I don’t know why it’s the case, but making unusual physical things that embody new possibilities and features is a great way of getting people – us – to think more imaginatively about future possibilities, and so expand the space around which products can be built.

Radiodan is great for that, because it enables us to make peculiar new radio-like things very quickly (like this one) and thereby trigger one of these digressions into a different part of product-space or user-need-space. It frees up your brain to think about what you want to make, rather than how.

Digital Creativity

This second point is also about expanding the possible space, but in a different way. The “Radiodan Creative Process©®™” has no ©®™ at all (those are a joke!) and is not really a process, because it’s so simple: start talking about radios and everyone says “oh, I want one that does…”. Give someone a postcard with a rough drawing of a radio on it and some stickers representing buttons, dials and similar things, and ask them what they want their radio to do, and almost everyone will start drawing or writing something on it about what they want from a radio.

[Image: lots of radios]

Once you’ve got over the idea that radios could be different, ideas pop out of people so easily – and it’s so nice to experience that. This is the beginning of participatory design, I think, and it’s amazing when it works.

The “process” works for other things too (we created our own avatars using a similar idea and a simple sketch of a robot: that’s me in the middle).

[Image: our avatars]

I’ve headed this section “Digital Creativity” because that’s the term the BBC is using for its work around getting people to code. The BBC rightly sees it as broader than coding, but for me what’s missing is this (horrible phrase) elicitation of requirements: finding out what a user needs and describing those needs. Coding (or anything else that’s “digitally creative”) for most people has to be for a reason. Those who don’t need a reason are probably already coding….

The other aspect is that Radiodan is simple enough that anyone who knows some JavaScript can have a go at hacking their own version. It almost looks like an afterthought here, but it’s not: it’s an absolutely integral part of Radiodan that it lowers the barrier to making a real or nearly-real device, and for “Digital Creativity” Radiodan could be a simple kit (maybe a PCB, maybe a cardboard flatpack box) that (nearly) anyone could use to make their own radio. Lowering the barrier to entry runs as a principle throughout the project, and is applicable to lots of potential users, inside or outside the BBC.

Concurrent code-and-standards development

I’ve been peripherally involved in the W3C for many years, and so it feels completely natural to me that standards work should have running code associated with it while the standards are being made (also, often, tests and detailed usecases; and many standards are based on existing products). Similarly, the IETF has “rough consensus and running code” as a founding belief.

As I understand it, that’s often not the way that radio and TV standards work (but this is the part I’m least confident about, so please do comment). Instead, there are some broad usecases that the group has in mind when it starts; then some engineers create very precise specifications of the features to be implemented, and then implementation starts. Of course the engineers know the capabilities of their technologies, so they can be fairly confident about what’s technically possible. But the missing part for me is whether anyone actually wants the thing that’s being specified.

These kinds of standards seem to be built on the principle of “build it and they will come”: make a technology that’s interesting and flexible enough, and people will build interesting applications on top of it, and the end users will be happy. But it’s a risky strategy – the overlap between technologies flexible enough to enable a profusion of interesting applications and precisely enough specified to make interoperability practical happens very rarely. Yet it’s the only strategy available, because it’s so difficult and slow to develop for radios (and TVs).

IRT are doing some very interesting work in making a hybrid radio from a Pi, which could enable concurrent standards and code – which I think would be a massive step forward. But I’m talking about something different: using prototypes to test, before standardisation starts, how and whether people will actually use the features enabled by the standard.

In one sense this is simply paying more than lip-service to usecases, but Radiodan and things like it for TV can speed up this pre-standards process. I’d argue that you can get a long way with paper prototyping to get feedback on new products and features, but there’s nothing like putting a working device in front of a human and seeing what they do with it, and that’s what we can now do with Radiodan. Suddenly we can find out if people want the thing. Risk is reduced without reducing creativity.

Real user testing of novel real radios

This is the final thing we could do – but haven’t yet – with Radiodan: make a “living lab” of people who could test radios, such that those features could go into a radio product. Whether that’s appropriate for an organisation like the BBC is quite a different matter, but we can at least see how it might be possible, using deployment and development tools from the web applied to devices.

The most interesting part

I love the participatory design part (I suppose this is why people become teachers), but the most interesting piece of the puzzle is that people seem to visualise features better when they form part of a physical object than when they are part of a digital one. Maybe it’s something to do with affordances and the intuitions we get when we interact with something physically – I’ve found no academic evidence so far – but it’s a real effect. What’s more, it applies even when you just get people to think about physical objects, which is both amazing and also much quicker.

I put at the start “Radiodan is a radical new way of putting audiences at the heart of media device development…” which just shows you what clichéd bullshit can flow out of my typing just to get things moving, but I do, truly, think it is a radical way of thinking of, testing and making new devices. I hope I’ve managed to explain why I think that, finally.

A quick Radiodan: Exclusively Archers

I made one of these a few months ago – they’re super simple – but Chris Lynas asked me about it, so I thought I should write it up quickly.

It’s an internet radio that turns itself on for the Archers, then turns itself off again afterwards. It’s not very smart: it just uses a crontab and the default Radiodan application’s REST API. It would be more interesting to do it based on real-time metadata – I’ll have a look at that at some point – but this does the job, as the Archers’ schedule barely moves at all.

[Image: Exclusively Archers postcard]

You’ll need: a Raspberry Pi, a speaker with a 3.5mm jack, a USB wifi card (ideally with an RT5370 chipset), an 8GB micro SD card, a power supply for the Pi, and a nice box to put it in.

1. Provision the SD card

If none of this is familiar, you might want to follow the official Raspberry Pi instructions. I did this on Mac OS X – it ought to work on Linux too.

Get the Radiodan 8GB SD card image:

curl -O http://dev.notu.be/2014/12/radiodan/2014-12-23-radiodan.img.gz

Plug the SD card into the laptop using a card reader. List disks:

$ diskutil list

Find the disk number (e.g. “2”) and unmount that disk, substituting its number for X below:

$ diskutil unmountDisk /dev/diskX

Unzip and write the image in 1 step:

$ gzip -dc /path/to/2014-12-23-radiodan.img.gz | sudo dd of=/dev/diskX bs=1m
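
(Writing to the raw device is usually a lot faster on Mac OS X – same command, but rdiskX instead of diskX:)

$ gzip -dc /path/to/2014-12-23-radiodan.img.gz | sudo dd of=/dev/rdiskX bs=1m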

2. Get the wifi working

Plug everything into the Pi, add the SD card and turn it on.

Then, either:

Wait for it to boot up, look for the wifi network it generates (radiodan-configuration), connect to it, and follow the instructions on the captive portal, returning to your usual wifi afterwards.

The success of this depends on your chipset – I’ve done a quick analysis here and some just won’t work.

[Image: Radiodan wifi configuration portal]

OR, login and add your wifi details to /etc/wpa_supplicant/wpa_supplicant.conf –

network={
ssid="YOUR_NETWORK_NAME"
psk="YOUR_NETWORK_PASSWORD"
}

usually does the trick. Either way, reboot afterwards.
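
If you’d rather not store the password in plain text, wpa_passphrase (which ships with wpasupplicant) will generate an equivalent network block with a hashed psk (it also emits the plaintext as a comment, which you can delete):

$ wpa_passphrase YOUR_NETWORK_NAME YOUR_NETWORK_PASSWORD | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf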

3. Add a crontab

SSH in:

$ ssh pi@radiodan.local

password “raspberry”

$ crontab -e

This is for the Archers, using the Radiodan REST API. (Note the crontab hours are an hour behind the broadcast times in the comments – the Pi’s clock runs on UTC and these are BST times, so adjust to suit – and in the day-of-week field 1-5 is Monday to Friday, 7 is Sunday.)

# turn it on at 14:02 weekdays
2 13 * * 1-5  /usr/bin/curl -X POST http://localhost/radio/service/radio4  

# turn it off at 14:15 weekdays
15 13 * * 1-5  /usr/bin/curl -X DELETE http://localhost/radio/power

# turn it on at 19:02 weekdays and sundays
2 18 * * 1-5,7  /usr/bin/curl -X POST http://localhost/radio/service/radio4 

# turn it off at 19:15 weekdays and sundays
15 18 * * 1-5,7  /usr/bin/curl -X DELETE http://localhost/radio/power

# turn it on at 10:00 on sundays for the omnibus edition
0 9 * * 7 /usr/bin/curl -X POST http://localhost/radio/service/radio4

# and turn it off at 11:15
15 10 * * 7 /usr/bin/curl -X DELETE http://localhost/radio/power

Save the crontab file.
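
Before leaving it to cron, it’s worth testing the two API calls by hand (assuming the default Radiodan application is running):

$ curl -X POST http://localhost/radio/service/radio4 # on, tuned to Radio 4
$ curl -X DELETE http://localhost/radio/power        # off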

4. Put it in a box

And enjoy your Radiodan :-)

[Image: Exclusively Archers box]

A quick analysis of wifi cards for using a Raspberry Pi as an access point

When Radiodan can’t access the web, it throws up an access point (AP) created by the Pi: you connect directly to that, and a captive portal displays the available wifi networks and asks you for the password of the one you want. Getting wifi credentials into objects with no user interface isn’t easy, and this is the best approach we’ve found so far (Chromecast does something similar).

However, our home tests suggest that the many varieties of wifi USB cards and access points cause problems of two major kinds. One is a general problem of Radiodan accessing the internet – the wifi access point in a house might just be too far away. The other is that some wifi cards don’t seem to work properly as access points, and we wanted to get to the bottom of where those problems were, as identified with the kind help of Giles Booth (@blogmywiki). This is so that we can recommend wifi cards to go with Radiodan, or at least say which are known to work.

I’ve started on the second problem here. I had six USB wifi cards of various kinds in the house:

[Image: a selection of wifi cards – left to right: WiPi, Comfast, Micronet, Tenda, Loglink, Edimax]

Here’s what I found for each.

Test setup:

  • 5 metres from an Apple wifi point
  • Mac OS 10.9.5 Macbook Air laptop
  • 30 cm between laptop and Pi

Test conditions:

  • Radiodan clean image, boot up, try and connect to the radiodan-configuration AP from laptop

Results

For each card, here’s the result and the lsusb output:

  • WiPi – network appears in 3m 10s – Bus 001 Device 004: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter
  • Comfast CF-WU720N – network appears in 3m 10s – Bus 001 Device 007: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter
  • Micronet SP907NS – nothing after 4 minutes – Bus 001 Device 006: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter
  • Tenda W311U+ (with aerial) – network appears in under 3 minutes – Bus 001 Device 009: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter
  • Loglink WL0084B – network appears, but I can’t connect to it – Bus 001 Device 008: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter
  • Edimax EW-7811Un – nothing after 4 minutes – Bus 001 Device 005: ID 7392:7811 Edimax Technology Co., Ltd EW-7811Un 802.11n Wireless Adapter [Realtek RTL8188CUS]

So, in short, I had cards from two different chipset manufacturers, Ralink and Realtek, and the Realtek ones don’t work as APs with our setup.

It should be possible to get the Realtek ones working – there’s a detailed description here – and using those instructions I managed to get a network showing up within our framework, but I couldn’t connect to it, so we’ll have to test this some more.
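
For reference, my understanding is that the fix involves Realtek’s patched hostapd build, with the driver line in hostapd.conf changed – something like this sketch, which I haven’t yet got working end-to-end:

# /etc/hostapd/hostapd.conf – assumes the patched Realtek hostapd
interface=wlan0
driver=rtl871xdrv # stock hostapd would use driver=nl80211
ssid=radiodan-configuration
hw_mode=g
channel=6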

How to find your wifi card’s chipset

The best way is to plug it into a Raspberry Pi that you are already connected to and type:

pi@radiodan ~ $ lsusb

This will give you a list of everything connected to USB, in my case (I had two wifi cards connected):

Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. 
Bus 001 Device 004: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter
Bus 001 Device 006: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter

On Mac OS X you can look in the Apple menu -> About This Mac -> More Info -> System Report, then click on USB – but you’ll need to google the vendor name and ID to identify the chip.
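
Or, to save the clicking, the same information is available from the Terminal on OS X:

$ system_profiler SPUSBDataType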