Radiodan Part 2: Drawing customisations to discover latent user needs

As I explained in the previous post, we used the Radiodan postcards to persuade people to talk about what they wanted their radios to do. The process involved asking them to customise a simple sketch of a radio by drawing and writing on it, and by adding stickers representing buttons, dials and lights, in answer to the question “what would your radio do?”.

Novelty was high across the collection of cards, evidenced by our colleagues, unprompted, designating one dimension of their analysis of the cards “wacky”.

Diversity was also high. Themes emerged, but they were on quite different dimensions.

Radiodan_workshop_mapping
(pdf)

Identifying Latent User Needs

I think the postcards may be a technique for cheaply identifying some kinds of “latent user needs” (the things people don’t know they want, or don’t ask for).

robin

Product owners in companies often rely on their own intuitions and experiences to uncover latent needs. This is easy and cheap, but subject to groupthink (the sort of reasoning that leads to business models like “I need X, and all my friends need it, therefore lots of other people will too”).

An alternative route to discovering what you don’t know is to undertake ethnographic studies, such as diary studies or observational studies where you video or otherwise record current behaviour in situ and then analyse the results. The extent of use of these is limited by the significant expense, time and expertise required.

Getting people to draw postcards is obviously far less informative than a proper ethnographic study (or an in-depth interview), but it’s very fast and cheap, so you can ask more people about their experiences and so reach different kinds of people. It also has some nice inklings of Participatory Design or Co-Design, where people start to become an active part of product design.

Understanding Why It Worked

To extrapolate the technique to other devices or areas, I need to work out why it worked. I’m neither a usability researcher nor a psychologist, so I’d very much welcome comments from those with greater expertise.

I think it worked for two reasons:

The first was the way we structured the conversation to use the present to think about the future. I’ve started noticing a great deal of this since I saw Matt Novak give a talk called “A Brief History of Tomorrow” at dConstruct 2015. He argued that the future is always envisioned using the technologies of the present: a video phone imagined in the 1900s, for example, was illustrated as a combination of a static telephone and a projector.

videophone
(source)

The new idea is there, superimposed on the objects of that time.

The loveliest and most radical example I’ve come across was the leap of abstraction made by Charles Babbage and Ada Lovelace to reimagine the punch-card-based Jacquard loom as a computer.

Just because someone can make that jump to the future based on the objects of the present doesn’t mean they will, but if you start with a thing that is well understood and familiar, you may be able to get people to extrapolate the future they want from it[*].

The second reason relates to a form of prototyping known as ‘pretotyping’ – cheaply understanding how you might use an object by pretending it already exists. Famously, Jeff Hawkins carried around a suitably-sized piece of wood to give himself a physical intuition about what it would be like to own and use a Palm Pilot, before building a more expensive working prototype.

My unproven suspicion with the postcards is that just thinking about and drawing objects engages your physical intuitions in a similar way, giving authentic insights into the role that object plays in your life.

We’re all going to Productland

If I’m right about these two reasons, then the novelty and diversity we found comes from the diversity of the people that we talked to, and the differences in how their lives bump into radio. This makes the cards excellent starting points for exploring Richard Pope’s multi-dimensional version of Productland.

In Part 3 I’ll talk about the physical prototypes we made, really this time.


 

[*] In the radio case, this all falls down when you talk to children, who don’t tend to listen to radio at all.

Hackspace Hat quick install (or: audio and video streaming from a Raspberry Pi to a remote or local WebRTC-compatible-browser)

I’ve been distracted by other things, but just in case it’s useful to anyone, here’s how to make a HackspaceHat with one-way streaming audio and video (i.e. audio and video streaming from a Pi to a remote server).

We’re now thinking about different ways of doing this, as we have been having lots of problems with ports, and we’re using WebRTC only to stream to a server, not peer-to-peer, and only one-way. But anyway. It’s easy and it mostly works.

This excellent article got me started – I added audio and the remote server, and the gstreamer commands have changed slightly.

You will need:

hsh_innards

Set up the Pi

This is how I do it on a Mac. Here are the official instructions.

diskutil list
diskutil unmountDisk /dev/diskn
sudo dd bs=1m if=~/Downloads/2015-05-05-raspbian-wheezy.img of=/dev/rdiskn

(where ‘n’ is the disk number from diskutil list, on mine, usually, ‘2’).
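dd will cheerfully overwrite whichever disk you point it at, so it’s worth double-checking n before pressing enter. Here’s a small sketch of a guard (the disk number and image path are placeholders for your own values); it only prints the dd command so you can eyeball it before running it for real:

```shell
#!/bin/sh
# Guard sketch for the dd step: refuse to touch disk0, which is the Mac's
# system disk. DISK and IMG are placeholders - set them for your machine.
DISK=disk2
IMG="$HOME/Downloads/2015-05-05-raspbian-wheezy.img"

if [ "$DISK" = "disk0" ]; then
  echo "refusing to write to $DISK - that's the system disk" >&2
  exit 1
fi

# Build and print the command so you can check it before running it.
CMD="sudo dd bs=1m if=$IMG of=/dev/r$DISK"
echo "$CMD"
```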

I’ve not tried this with jessie yet – no reason to think it wouldn’t work.

Then boot up the Pi and ssh in using network sharing and an ethernet cable, or use a keyboard and HDMI screen. Log in.

Finish the config and enable camera

sudo raspi-config

  • expand filesystem
  • enable camera

Reboot.

On the Pi

Here we set up gstreamer to send the output of the Raspi camera to Janus, a webRTC gateway, which can be on a local or remote server.

Update the Pi

sudo apt-get update -y && sudo apt-get upgrade -y

Install gstreamer

sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad libgstreamer-plugins-base1.0-dev

Install Janus on the Pi

For a self-contained, easily testable system we also need Janus and nginx on the Pi. Later we’ll install these two on the server too.

Install prerequisites:

sudo aptitude install libmicrohttpd-dev libjansson-dev libnice-dev libssl-dev libsrtp-dev libsofia-sip-ua-dev libglib2.0-dev libopus-dev libogg-dev libini-config-dev libcollection-dev pkg-config gengetopt libtool automake dh-autoreconf

Install Janus:

git clone https://github.com/meetecho/janus-gateway.git
cd janus-gateway
sh autogen.sh
./configure --disable-websockets --disable-data-channels --disable-rabbitmq --disable-docs --prefix=/opt/janus
make
sudo make install
sudo make configs

Configure Janus

sudo pico /opt/janus/etc/janus/janus.plugin.streaming.cfg

[gst-rpwc]
type = rtp
id = 1
description = RPWC H264 test streaming
audio = yes
audioport = 8005
audiopt = 10
audiortpmap = opus/48000/2
video = yes
videoport = 8004
videopt = 96
videortpmap = H264/90000
videofmtp = profile-level-id=42e028\;packetization-mode=1

Install nginx

sudo aptitude install nginx

Install a modified HTML page


cd /usr/share/nginx/www

sudo cp -r /home/pi/janus-gateway/html/* .

sudo curl -O https://gist.githubusercontent.com/libbymiller/70ad942d070853167659/raw/acc38f4ccefb2c333a4cca460ec52a3151cbeebd/janus-gateway-streamtest.html

sudo service nginx start

Test

in one terminal window:

raspivid --verbose --nopreview -hf -vf --width 640 --height 480 --framerate 15 --bitrate 1000000 --profile baseline --timeout 0 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=127.0.0.1 port=8004 alsasrc device=plughw:Set ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=8005

in another:

/opt/janus/bin/janus -F /opt/janus/etc/janus

Go to http://pi-ip/janus-gateway-streamtest.html in Firefox (Chrome doesn’t support h264)

You should get audio and video streaming from your Pi.
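For reference, that long gst-launch line is easier to read as its two branches: video to port 8004 and audio to port 8005, matching videoport and audioport in the Janus streaming config. This sketch only builds and prints the command (you need raspivid and the camera to actually run it); the host is the only part that changes when you later point it at a remote server:

```shell
#!/bin/sh
# Build (but don't run) the streaming command, split into its two branches.
HOST=127.0.0.1   # Janus is local in this test; use the server's IP later

# Video branch: H264 from raspivid, packetised as RTP, sent to port 8004.
VIDEO="fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$HOST port=8004"

# Audio branch: ALSA capture, encoded as Opus, sent as RTP to port 8005.
AUDIO="alsasrc device=plughw:Set ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=$HOST port=8005"

CMD="raspivid --verbose --nopreview -hf -vf --width 640 --height 480 --framerate 15 --bitrate 1000000 --profile baseline --timeout 0 -o - | gst-launch-0.10 -v $VIDEO $AUDIO"
echo "$CMD"
```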

On a server

Next we want to stream that to a remote server. The simplest way of doing this is to tell gstreamer to send the streams to the IP of the remote server. In theory we ought to be able to get WebRTC to handle this via Janus, but I’ve not been able to figure out how, nor whether it’s worth it – for our usecase you’re always going to want to stream via a server, we think.

Deploy a server

I used a Linode 2048 (48GB disk, 2 CPU cores, 3TB transfer; $0.03/hr up to $20/mo) with 64-bit Ubuntu 14.04 LTS.

Install Janus and Nginx on the server

sudo apt-get update -y && sudo apt-get upgrade -y

sudo aptitude install libmicrohttpd-dev libjansson-dev libnice-dev libssl-dev libsrtp-dev libsofia-sip-ua-dev libglib2.0-dev libopus-dev libogg-dev libini-config-dev libcollection-dev pkg-config gengetopt libtool automake dh-autoreconf

you may need to do

sudo apt-get install gupnp-igd-1.0

Install Janus:

sudo apt-get install git

git clone https://github.com/meetecho/janus-gateway.git
cd janus-gateway
sh autogen.sh
./configure --disable-websockets --disable-data-channels --disable-rabbitmq --disable-docs --prefix=/opt/janus
make
sudo make install
sudo make configs

Configure Janus

sudo pico /opt/janus/etc/janus/janus.plugin.streaming.cfg

[gst-rpwc]
type = rtp
id = 1
description = RPWC H264 test streaming
audio = yes
audioport = 8005
audiopt = 10
audiortpmap = opus/48000/2
video = yes
videoport = 8004
videopt = 96
videortpmap = H264/90000
videofmtp = profile-level-id=42e028\;packetization-mode=1

Install nginx

sudo aptitude install nginx

Install a modified HTML page

cd /usr/share/nginx/html/

sudo cp -r /opt/janus/share/janus/demos/* .

sudo curl -O https://gist.githubusercontent.com/libbymiller/70ad942d070853167659/raw/acc38f4ccefb2c333a4cca460ec52a3151cbeebd/janus-gateway-streamtest.html

sudo service nginx start

Test it

On the Pi:

raspivid --verbose --nopreview -hf -vf --width 640 --height 480 --framerate 15 --bitrate 1000000 --profile baseline --timeout 0 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=ip-of-server port=8004 alsasrc device=plughw:Set ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=ip-of-server port=8005

On the server:

/opt/janus/bin/janus -F /opt/janus/etc/janus

On the server you should see:

[gst-rpwc] New video stream! (ssrc=460806300)
[gst-rpwc] New audio stream! (ssrc=3733839967)

Go to http://ip-of-server/janus-gateway-streamtest.html in Firefox.

Enabling Wifi on the Pi

Just for reference.

Wpasupplicant is normally installed. If it isn’t, do:

sudo apt-get install -y wpasupplicant

Then edit:

sudo pico /etc/wpa_supplicant.conf

network={
ssid="YourWifiNetworkName"
psk="YourWifiNetworkPassword"
}
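If you’re scripting the setup, the same network block can be appended with a heredoc. A sketch (the SSID and password are placeholders, and it writes to /tmp rather than the real config so you can check the result before copying it into place with sudo):

```shell
#!/bin/sh
# Append a wifi network block non-interactively. Writing to /tmp here;
# on the Pi you'd target /etc/wpa_supplicant.conf with sudo.
CONF=/tmp/wpa_supplicant.conf

# Quoted 'EOF' stops the shell expanding anything in the block, so
# passwords containing $ or backticks survive intact.
cat >> "$CONF" <<'EOF'
network={
    ssid="YourWifiNetworkName"
    psk="YourWifiNetworkPassword"
}
EOF

cat "$CONF"
```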

A version that works only within 10-30 metres

This version uses the wifi card as an access point, so to view the video / audio you connect to the Pi’s access point and then go to the web page on the Pi. Not sure how useful it is, but it’s quite fun, and easy, as you don’t need a separate server on the internet.

Create access point

This is a script we used for Radiodan, slightly modified to just create an access point.

git clone https://github.com/radiodan/provision.git
cd provision

replace the contents of steps/wpa/install.sh

with

sudo apt-get install -y --force-yes dnsmasq && \
sudo apt-get install -y --force-yes ruby1.9.1-dev hostapd=1:1.0-3+deb7u2 wpasupplicant && \
sudo gem install --no-ri --no-rdoc wpa_cli_web

then

sudo mkdir /var/log/radiodan
sudo LOG_LEVEL=DEBUG ./provision avahi nginx wpa

change name of network

sudo pico /etc/hostapd/hostapd.conf

change ssid to

ssid=hackspacehat

edit /etc/nginx/sites-enabled/wpa_cli_web_redirect so it contains just this:

server {
listen 80;
root /var/www/html;
}

edit /opt/radiodan/adhoc/try_adhoc_network

to remove

# Do 6 scans over 1 min
#for i in {1..6}
#do
# echo "Scan $i of 6"
# /sbin/wpa_cli scan
# /bin/sleep 10
#done

and also

#echo "Starting wpa-cli-web"
#/etc/init.d/wpa-cli-web start

remove wpa server from init.d

sudo update-rc.d wpa_cli_web remove
sudo update-rc.d wpa-conf-copier remove

reboot

unplug ethernet if plugged in

connect to hackspacehat to check it works – you should be able to ssh to 10.0.0.200.

(this is overkill, but I haven’t had time to investigate a quicker way)

Janus and gstreamer commands

As in the first example, we just run gstreamer sending the streams to localhost, and Janus on the Pi.

So, two windows on the Pi, one sending the streams:

raspivid --verbose --nopreview -hf -vf --width 640 --height 480 --framerate 15 --bitrate 1000000 --profile baseline --timeout 0 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=127.0.0.1 port=8004 alsasrc device=plughw:Set ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=8005

one running Janus:

/opt/janus/bin/janus -F /opt/janus/etc/janus

then connect to AP “hackspacehat” and
go to http://10.0.0.200/janus-gateway-streamtest.html

You should get audio and video from your Pi, which will work while you are in range of the Pi’s access point.

Biscuit projects

This is a quote by David Mitchell from an article about David Cameron. It’s not the main point of the article or anything, but I really like it:

“…a microscopic speck of thought, like an infant universe, that can suddenly expand at frightening speed into a fully formed request for a biscuit…”

This is the mechanism for new project ideas, in many different kinds of jobs. Someone senior has the microscopic speck of thought, and before you know it, there is the request for a biscuit, and nothing can stop that biscuit being delivered. It could be a bad idea or a good one, but it already has its own momentum.

Maybe this is a job for catwigs.

Radiodan Part 1: Expanding the Overton window, but for Radios

Radiodan is an Open Source (software and hardware) internet radio platform based around a Raspberry Pi. It was made in BBC Research and Development in the “Device Futures” Team. Dan Nuttall, Andrew Nicolaou and I worked on Radiodan with help from other colleagues, particularly Joanne Moore, Andrew Wood, Chris Needham, and Tim Broom.

Andrew and Dan are leaving the BBC at the end of this week, so now seems like an excellent time to reflect on what it was for and how it was useful.

tl;dr

Our work on Radiodan over the last two years was a two-pronged approach to understanding possible futures for radio. One prong was structured conversations with people about what they wanted from radio. The other was a prototype-led investigation into how to make a radio: what the architecture would look like, how it might be controlled, whether it would work, and how much it would cost to make.

By talking to people about what they wanted, we expanded our own frame of reference about what a radio might be. By grounding the work in physical prototypes we gained a deep understanding of how new radio-like devices might be made. By showing other people those prototypes, we expanded their frame of reference too.

Overton window?

“Expanding the Overton window for Radios” is an analogy too far, probably, but here goes anyway.

At Bristol Hackspace we started getting people to draw what they wanted from a radio. I went around persuading people to tell me about the radio they wanted. Sometimes I drew it for them, and sometimes we got them to draw it themselves, with a few little props (outline of a radio, some stickers for buttons and dials, speakers and lights). Among the many lovely ideas, this is my favourite (by Paul Downey):

psd_radio

When you explain the Archers Avoider, ask people to think about physical radios, and give them this set of simple tools, they say “oh!”, and then they say “my radio would do this: ….” and tell you. We soon had dozens of them.

A couple of years ago, I talked[1] about a “Cambrian Explosion”[2] of radios based on these postcards, from which we could choose the “best” ones:

cambrian

This is a kind of shuffled, messy product-space, a bunch of different features and ideas with no structure and no systematic way of exploring it.

Richard Pope’s mind-expanding blog posts Product Land part 1 and part 2 (which you should read) suggest a more structured approach to exploring this space: figuring out the dimensions of the product, then exploring different points in that space. I think we are taking different approaches to the same problem. Richard writes:

“My proposition is that digital products are also inherently complex and inherently multidimensional, that design is too often constrained by our methods of thinking about them and too often risk being either derivative or simple iterations of variants as a result; or worse, user needs are never met as well as they could be because we are looking for solutions in the wrong place.”

There is an argument that radios just do a job and do it well. The postcards people made say something else:

There are lots of potential features of radio that people would enjoy that don’t exist.

Many won’t be viable in the slightest, but some might be, and we can’t build them – whether as a physical device or in an app – if they aren’t part of what we consider the “radio” class of products. As Richard puts it:

“You can’t build what you can’t think of in the first place.”

There’s also a genuine, interesting, argument about accessibility here. If you don’t ask a wide range of people what they want from a device, you probably won’t be able to guess what it is they want or need. The postcards surprised us. We were getting slivers of light into bits of the product space for radio that we hadn’t considered.

Finally to the Overton Window

The Overton window is the current frame of reference in politics: the range of ideas the public will accept, which determines the electoral viability of policies. Currently, Jeremy Corbyn is held to have shifted the Overton window in the UK to the left, from (a) to (b), like this[3]:

overton

Radiodan is a little like the Jeremy Corbyn[4] of radio devices.

We were using mechanisms to try and expand the acceptable range of what constituted a radio by peering into a messy product space and then translating what we saw into prototypes.

I’ll write about the prototypes we made in part 2.


[1] I made Dan and Andrew put it into their Solid presentation about Radiodan
[2] I’d been listening to this really excellent In Our Time about the Cambrian Explosion
[3] Not to scale
[4] #SorryNotSorry

Thanks to many people for reading drafts of these posts: Damian Steer, Dan Brickley, Richard Sewell, Dan Nuttall, Andrew Nicolaou, Tristan Ferne. All errors and poor analogies are mine.

AWS Janus Gateway strangeness

Can anyone think of a reason why this might happen? (Janus is a WebRTC gateway.)

  • Free tier t2.micro AWS, brand new account
  • select ami-886e3aff (via http://cloud-images.ubuntu.com/locator/ec2/), create instance, all defaults; new security group with all TCP and UDP ports open; used default VPC, new key
  • chmod 400 hsh7.pem
  • ssh -i hsh7.pem -v ubuntu@public-ip
  • installation steps below

result: go to http://public-ip/janus-gateway-streamtest.html in Firefox 40

In my house I can see a testcard.

On no other network can anyone see a testcard.

WHY?

Steps:

sudo apt-get update && sudo apt-get upgrade

sudo aptitude install libmicrohttpd-dev libjansson-dev libnice-dev libssl-dev libsrtp-dev libsofia-sip-ua-dev libglib2.0-dev libopus-dev libogg-dev libini-config-dev libcollection-dev pkg-config gengetopt libtool automake dh-autoreconf gupnp-igd-1.0

git clone https://github.com/meetecho/janus-gateway.git
cd janus-gateway
sh autogen.sh
./configure --disable-websockets --disable-data-channels --disable-rabbitmq --disable-docs --prefix=/opt/janus
make
sudo make install
sudo make configs

sudo pico /opt/janus/etc/janus/janus.plugin.streaming.cfg

[gst-rpwc]
type = rtp
id = 1
description = RPWC H264 test streaming
audio = yes
audioport = 8005
audiopt = 10
audiortpmap = opus/48000/2
video = yes
videoport = 8004
videopt = 96
videortpmap = H264/90000
videofmtp = profile-level-id=42e028\;packetization-mode=1

Installing gstreamer and nginx

sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad libgstreamer-plugins-base1.0-dev

sudo aptitude install nginx

cd /var/www/html

sudo cp -r /opt/janus/share/janus/demos/* .

sudo curl -O https://gist.githubusercontent.com/libbymiller/70ad942d070853167659/raw/acc38f4ccefb2c333a4cca460ec52a3151cbeebd/janus-gateway-streamtest.html

sudo service nginx start

Test – in two windows:

gst-launch-0.10 videotestsrc ! video/x-raw-rgb,width=640,height=480,framerate=30/1 ! videoscale ! videorate ! ffmpegcolorspace ! timeoverlay ! x264enc bitrate=256000 profile=1 tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=8004

/opt/janus/bin/janus -F /opt/janus/etc/janus