Author Archives: libbymiller

Libbybot – a presence robot with Chromium 51, Raspberry Pi and RTCMultiConnection for WebRTC

I’ve been working on a cheap presence robot for a while, gradually and with help from buddies at hackspace and work. I now use it quite often to attend meetings, and I’m working with Richard on ways to express interest, boredom and other emotions at a distance using physical motion (as well as greetings and ‘there’s someone here’).

I’ve noticed a few hits on my earlier instructions / code, so thought it was worth updating my notes a bit.

(More images and video on Flickr)

The main hardware change is that I’ve dispensed with the screen, which in its small form wasn’t very visible anyway. This has led to a rich seam of interesting research around physical movement: the robot needs to show that someone is there somehow, and it’s nice to be able to wave when someone comes near. It’s also very handy to be able to move left and right to talk to different people. It’s gone through a “bin”-like iteration, where the camera popped out of the top; a “The Stem“-inspired two-sticks version (one stick for the camera, one to gesture); and is now a much improved IKEA ESPRESSIVO lamp hack, with the camera and gesture “stick” combined again. People like this latest version much more than the bin or the sticks, though I haven’t yet tried it in situ. Annoyingly the lamp itself is discontinued, a pity because it fits a Pi3 (albeit with a right-angled power supply cable) and some servos (driven directly from the Pi with ServoBlaster) rather nicely.
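For anyone curious about the servo side: ServoBlaster exposes a device file, so moving a servo is just a write. Here’s a minimal sketch from Node – not my exact code, and it assumes the servod daemon is running with your servo wired up as ServoBlaster’s channel 0:

    // Move a servo via ServoBlaster's /dev/servoblaster device file.
    // Positions are in 10µs units, so 150 means a 1.5ms pulse - roughly centre.
    const fs = require('fs');

    function setServo(channel, position) {
      fs.writeFileSync('/dev/servoblaster', `${channel}=${position}\n`);
    }

    setServo(0, 150);                          // centre the servo
    setTimeout(() => setServo(0, 200), 500);   // swing to one side half a second later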

The main software change is that I’ve moved it from EasyRTC to RTCMultiConnection, because I couldn’t get EasyRTC to work with data+audio (rather than data+audio+video) for some reason. I like RTCMultiConnection a lot – it’s both simple and flexible (I get the impression EasyRTC was based on older code while RTCMultiConnection has been written from scratch). The RTCMultiConnection examples and code snippets were also easier to adapt for my own, slightly obscure, purposes.
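To give a flavour of why I like it, here’s roughly what a data+audio session looks like in RTCMultiConnection v3. This is a sketch rather than the actual libbybot code – the socket URL and room id are placeholders for your own signalling setup:

    // Open (or join) a room carrying audio and a data channel, but no video.
    const connection = new RTCMultiConnection();
    connection.socketURL = 'https://your-signalling-server.example/'; // placeholder
    connection.session = { audio: true, data: true };
    connection.mediaConstraints = { audio: true, video: false };

    // Data channel messages - handy for sending gesture/servo commands.
    connection.onmessage = function (event) {
      console.log('got message:', event.data);
    };

    connection.openOrJoin('libbybot-room'); // placeholder room id
    // Later, from the other end: connection.send('wave');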

I’ve also moved from using a Jabra speaker/mic to a Sennheiser one. The Jabra (while excellent for connecting to laptops to improve the sound on Skype and similar meetings) was unreliable on the Pi, dropping down to low volume, and the mic periodically stopped working with Chromium (even when used with a powered USB hub). The Sennheiser one is (even) pricier but much more reliable.

Hope that helps, if anyone else is trying this stuff. I’ll post the code sometime soon. Most of this guide still holds.

[Image: view_from_user]

Immutable preferences, economics, social media and algorithmic recommendations

One of the things that encouraged me to leave economics after doing a PhD was that – at the time, and still in textbook microeconomics – the model of a person was so basic it could not encompass wants and needs that change.

You, to an economist, usually look like this:

[Image: “Indifference Curves” over two goods, by SilverStar at English Wikipedia, CC BY 2.5]

You have (mathematically-defined) “rational” preferences between goods and services, and these preferences are assumed to stay the same. Since I’d done a degree which encompassed philosophy and politics as well as economics, this annoyed me tremendously. What about politics? arguing? advertising? newspapers? alcohol? moods? caffeine? sleepiness? Economics works by using simplified models, but the models were far too simplistic to encompass effects I thought were interesting. The wonderful, now-dead M. O. L. Bacharach helped me understand game theory, which had a more sophisticated model of interactions and behaviour. Eventually I found Kahneman and Tversky‘s work on bounded rationality. As part of my PhD I came across the person-time-slices work of Derek Parfit and the notion of discontinuous personhood.

Ten years after I left economics, behavioural economics (which spawned Nudge, advertising principles applied to behavioural change) became mainstream. Scroll forward twenty years and I can see a simplistic view of the things that people want appearing again, but this time as media recommendations and social media content-stream personalisation. Once again there’s an underlying assumption that there’s something fundamental to us about our superficial wants, and that these “preferences” are immutable.

It’s naive to assume that because I have bought a lamp that I’ll want to buy more lamps and therefore lamps should follow me across the internet. It’s silly to assume that because I watched Midsomer Murders repeats last night while programming I’ll also want to watch it this evening with my partner. It’s against the available evidence to assume that my preferences will not change if I am constantly subjected to a stream of people or other sources expressing particular views. It’s cynical to base a business model on advertising and simultaneously claim that filtering algorithms used in social media do not affect behaviour. My options and wants are not immutable: they depend on the media I see and hear, as well as how I feel and who I’m with, where I go, and who I talk to. It’s not just about variations on a theme of me either: they can and will change over time.

I’m looking at the media side of this in my day job. I think personalised media recommendations are wrongheaded in that they assume there’s a fundamental “me” to be addressed, and I think that hyper-personalised recommendations can be hugely damaging to people and civic society. I think that negotiated space between people of different opinions is an essential component of democracy and civilised living. I think a part of this is giving people the opportunity and practice of negotiating their shared media space by using media devices together. So that’s what we’re doing.

Anyway. Rant over. Back to libbybot.

Moving to Kolab from Google mail (“G-Suite”)

I had a seven-year-old Google ‘business’ account (now called ‘G-Suite’, previously Google Apps, I think) from back when it was free. Danbri put me onto it, and it was brilliant because you can use your own domain with it. It’s been very useful, but I’ve been thinking of moving to a paid-for service for a while. Kolab Now have strong security and privacy, a readable ToS, and with a group account you can have your own domain. I finally moved my mail today.

It wasn’t hard, but this isn’t the sort of thing I do a lot, and the information I found was a bit piecemeal. So here are some hints in case it’s useful to anyone else.

  1. Sign up for Kolab Now. You’ll need another email address to do that from, i.e. not the one you’re moving. You’ll get an invoice to pay sent to that email a bit later (I was a bit surprised not to pay immediately). It’s about £5/month for their group ‘lite’ service.
  2. Archive your old email from Google. I used gmvault‘s Mac OS X binary. I was originally intending to sync my email with my new account, but storing as much as I have costs substantially more on Kolab Now, so I made an archive instead (which I can still access from Mail.app). I had 14 GB in there, which took ~10 hours:
    ./gmvault sync me@mydomain.org
    ./gmvault export ~/old_email
  3. Log into the admin account of your google account and delete all the non-admin users. You get the option to download their data as you go, which is an excellent feature, kudos to Google.
  4. Actually deleting the account required going to ‘Billing’ as described here in step 5
  5. Login to your domain name registrar and delete the Google MX records
  6. Add the Kolab Now TXT or CNAME records to verify you own the domain (once logged in, Kolab’s instructions for this are under ‘hosting‘)
  7. Once verified, hosting -> show DNS records will contain the new Kolab MX records, which you’ll need to add to your domain DNS (see the sketch after this list)
  8. Straight away you should be able to see email in the Kolab Now web client and, depending on the TTL you’ve set, you should be able to send to it shortly afterwards
  9. Mail.app IMAP instructions are here
  10. To access the archive you made, use Mail.app’s File -> Import Mailboxes function.
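To give a flavour of steps 6 and 7, the DNS changes amount to something like the zone entries below. These are illustrative placeholders only – take the exact record names, tokens and mail exchanger hostnames from the Kolab Now ‘hosting’ page, as yours will differ:

    ; Illustrative zone-file entries - use the real values from Kolab Now's
    ; 'hosting' page, not these placeholders.
    @    IN TXT  "kolab-verify=YOUR-TOKEN-HERE"    ; proves you own the domain
    @    IN MX   10 mx.example-from-dashboard.com. ; mail exchanger shown in your dashboard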

A simple Raspberry Pi-based picture frame using Flickr

I made this Raspberry Pi picture frame – initially with a screen – as a present for my parents for their wedding anniversary. After user testing, I realised that what they really wanted was a tiny version that they could plug into their TV, so I recently made a Pi Zero version to do that.

It uses a Raspberry Pi 3 or Pi Zero with full-screen Chromium. I’ve used Flickr as a backend: I made a new account and used their handy upload-by-email function (which you can set to make uploads private) so that all members of the family can send pictures to it.

[Image: frame]

I initially assumed that a good frame would be ambient – stay on the same picture for say, 5 minutes, only update itself once a day, and show the latest pictures in order. But user testing – and specifically an uproarious party where we were all uploading pictures to it and wanted to see them immediately – forced a redesign. It now updates itself every 15 minutes, uses the latest two plus a random sample from all available pictures, shows each picture for 20 seconds, and caches the images to make sure that I don’t use up all my parents’ bandwidth allowance.
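The selection logic is simple. Here’s a sketch of it in JavaScript – the function and element names are illustrative, not the actual code, and fetchCachedPhotoList() is a stand-in for however you list the locally cached Flickr images:

    // Pick the two newest pictures plus a random sample of older ones.
    // `photos` is a list of cached image URLs, newest first.
    function pickPhotos(photos, sampleSize) {
      const latest = photos.slice(0, 2);
      const rest = photos.slice(2);
      const sample = [];
      while (sample.length < sampleSize && rest.length > 0) {
        const i = Math.floor(Math.random() * rest.length);
        sample.push(rest.splice(i, 1)[0]); // pick without repeats
      }
      return latest.concat(sample);
    }

    // Show each picture for 20 seconds; rebuild the queue every 15 minutes.
    let queue = [];
    let idx = 0;
    setInterval(function () {
      if (queue.length > 0) {
        document.getElementById('frame').src = queue[idx % queue.length];
        idx += 1;
      }
    }, 20 * 1000);
    setInterval(function () {
      fetchCachedPhotoList().then(function (photos) {
        queue = pickPhotos(photos, 10);
        idx = 0;
      });
    }, 15 * 60 * 1000);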

The only tricky technical issue is finding the best part of the picture to display. It will usually be a landscape display (although you could configure a Pi screen to be vertical), so that means you’re either going to get a lot of black onscreen or you’ll need to figure out a rule of thumb. On a big TV this is maybe less important. I never got amazing results, but I had a play with both heuristics and face detection, and both moderately improved matters.
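One cheap way to apply either approach in the browser is to let CSS do the cropping and steer it towards a focal point: the centre of a detected face if you have one, or the top third of the image as a fallback heuristic. A sketch, with illustrative names:

    // Crop towards a focal point (fx, fy in image pixel coordinates).
    // With no focal point, favour the top third, where faces tend to be.
    function positionImage(img, fx, fy) {
      img.style.objectFit = 'cover'; // fill the screen, crop the overflow
      const x = fx == null ? 50 : 100 * fx / img.naturalWidth;
      const y = fy == null ? 33 : 100 * fy / img.naturalHeight;
      img.style.objectPosition = x + '% ' + y + '%';
    }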

It’s probably not a great deal different to what you’d get in any off-the-shelf electronic picture frame, but I like it because it’s fun and easy to make, configurable and customisable. And you can just plug it into your TV. You could make one for several members of a group or family based on the same set of pictures, too.

Version 1: Pi3, official touchscreen (you’ll need a 2.5A power supply to power them together), 8GB micro SD card, and (if you like) a ModMyPi customisable Pi screen stand.

Version 2: Pi Zero, micro USB converter, USB wifi, mini HDMI converter, HDMI cable, 8GB micro SD card, data micro USB cable, maybe a case.

The Zero can’t really manage the face detection, though I’m not convinced it matters much.

It should take < 30 minutes.

The code and installation instructions are here.

LIRC on a Raspberry Pi for a silver Apple TV remote

The idea here is to control a full-screen chromium webpage via a simple remote control (I’m using it for full-screen TV-like prototypes). It’s very straightforward really, but

  • I couldn’t find the right kind of summary of LIRC, the Linux infrared remote control tool
  • It took me a while to realise that the receiving end of IR is easy if you have GPIO pins (as on a Raspberry Pi), rather than needing a USB receiver
  • Chromium 51 is now the default browser on Raspbian Jessie, making it easy to do quite sophisticated web interfaces on the Pi3

The key part is the mapping between what LIRC (or its daemon version) does when it hears a keypress and what you want it to do. Basically there’s lircd.conf, which describes the codes your particular remote sends for each button; a ~/.lircrc file, which maps each button to a command for irexec to run; and xdotool, which those commands use to send ordinary keyboard events to the Chromium window.

It’s all a bit confusing, as we’re mapping remote key presses to the nearest keyboard commands and then handling those in JavaScript in the HTML page. But it works perfectly.
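As an illustration, a single ~/.lircrc entry looks something like this – the button name comes from your lircd.conf, and the xdotool incantation is discussed below:

    # Run via irexec: when the remote's up button is seen, send an Up
    # keypress to the window titled "lirc-example".
    begin
      prog = irexec
      button = KEY_UP
      config = xdotool search --name lirc-example key --window %1 Up
    end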

I’ve provided an example lircd.conf for silver Apple TV remotes, but you can generate your own for any IR remote easily enough (using LIRC’s irrecord tool).

The way I’ve used xdotool assumes there’s a single webpage open in chromium with a specific title – “lirc-example”.
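On the web page side, all that’s needed is the matching title and an ordinary keydown handler. A minimal sketch:

    // Give the page the title xdotool searches for, then handle the
    // synthesised keypresses like any other keyboard input.
    document.title = 'lirc-example';
    document.addEventListener('keydown', function (event) {
      if (event.key === 'ArrowUp') {
        // e.g. move the selection up in the prototype UI
      }
    });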

Full instructions are on GitHub.

[Image: ir_pi3]

Links

lirc

xdotool

irexec