Libbybot – a presence robot with Chromium 51, Raspberry Pi and RTCMultiConnection for WebRTC

Edit, July 2017: I’ve put detailed instructions and code on GitHub. You should follow those if you really want to try it.

I’ve been working on a cheap presence robot for a while, gradually and with help from buddies at hackspace and work. I now use it quite often to attend meetings, and I’m working with Richard on ways to express interest, boredom and other emotions at a distance using physical motion (as well as greetings and ‘there’s someone here’).

I’ve noticed a few hits on my earlier instructions / code, so thought it was worth updating my notes a bit.

(More images and video on Flickr)

The main hardware change is that I’ve dispensed with the screen, which in its small form wasn’t very visible anyway. This has led to a rich seam of interesting research around physical movement: the robot needs to show that someone is there somehow, and it’s nice to be able to wave when someone comes near. It’s also very handy to be able to move left and right to talk to different people. It’s gone through a “bin”-like iteration, where the camera popped out of the top, a “The Stem”-inspired two-stick version (one stick for the camera, one to gesture), and is now a much improved IKEA ESPRESSIVO lamp hack with the camera and gesture “stick” combined again. People like this latest version much more than the bin or the sticks, though I haven’t yet tried it in situ. Annoyingly the lamp itself is discontinued, a pity because it fits a Pi 3 (albeit with a right-angled power supply cable) and some servos (driven directly from the Pi with ServoBlaster; see the sketch below) rather nicely.
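For anyone curious about the servo side, here’s a minimal sketch of driving ServoBlaster from Node.js. It assumes servod is already running (so /dev/servoblaster exists), and the channel number and positions are illustrative rather than libbybot’s actual values:

```js
// Minimal sketch, not libbybot's real code: assumes ServoBlaster's servod
// daemon is running and has created /dev/servoblaster, and that the pan
// servo is on ServoBlaster channel 0 (an assumption for illustration).
const fs = require('fs');

// ServoBlaster accepts "<channel>=<position>" writes; position is in
// 10µs pulse-width units by default, so 150 = 1.5ms, roughly centre.
function setServo(channel, position) {
  fs.writeFileSync('/dev/servoblaster', channel + '=' + position + '\n');
}

setServo(0, 150); // centre the pan servo
setTimeout(function () { setServo(0, 190); }, 1000); // turn after a second
```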

The main software change is that I’ve moved it from EasyRTC to RTCMultiConnection, because I couldn’t get EasyRTC to work with data+audio (rather than data+audio+video) for some reason. I like RTCMultiConnection a lot – it’s both simple and flexible (I get the impression EasyRTC was based on older code while RTCMultiConnection has been written from scratch). The RTCMultiConnection examples and code snippets were also easier to adapt for my own, slightly obscure, purposes.
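As an illustration, here’s roughly what that data+audio setup looks like with RTCMultiConnection (the v3 API). The socket URL and room id are placeholders, and the message handling is a simplified stand-in for whatever command protocol you’d actually use:

```js
// A rough sketch, not libbybot's actual code: assumes RTCMultiConnection v3
// and its socket.io client are already loaded on the page.
var connection = new RTCMultiConnection();

// Placeholder signalling server URL.
connection.socketURL = 'https://example.com:9001/';

// An audio + data-channel session with no video: the combination
// I couldn't get working in EasyRTC.
connection.session = { audio: true, data: true };

// The data channel can carry simple string commands (movement, waving).
connection.onmessage = function (event) {
  console.log('got command:', event.data);
};

connection.openOrJoin('libbybot'); // placeholder room id

// Once a peer has joined, commands go over the data channel:
// connection.send('wave');
```

The nice thing is that dropping video is just a matter of leaving it out of connection.session, which is exactly what I couldn’t persuade EasyRTC to do.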

I’ve also moved from using a Jabra speaker/mic to a Sennheiser one. The Jabra (while excellent for connecting to laptops to improve the sound on Skype and similar meetings) was unreliable on the Pi, dropping down to low volume and with the mic periodically not working with Chromium (even when used with a powered USB hub). The Sennheiser one is (even) pricier but much more reliable.

Hope that helps, if anyone else is trying this stuff. I’ll post the code sometime soon. Most of this guide still holds.

[Image: the view from the user’s side]
Immutable preferences, economics, social media and algorithmic recommendations

One of the things that encouraged me to leave economics after doing a PhD was that – at the time, and still in textbook microeconomics – the model of a person was so basic it could not encompass wants and needs that change.

You, to an economist, usually look like this:

[Figure: “Indifference Curves” over two goods, by SilverStar at English Wikipedia, CC BY 2.5]

You have (mathematically defined) “rational” preferences between goods and services, and these preferences are assumed to stay the same. Since I’d done a degree which encompassed philosophy and politics as well as economics, this annoyed me tremendously. What about politics? Arguing? Advertising? Newspapers? Alcohol? Moods? Caffeine? Sleepiness? Economics works by using simplified models, but these models were far too simplistic to encompass effects I thought were interesting. The wonderful, late M. O. L. Bacharach helped me understand game theory, which had a more sophisticated model of interactions and behaviour. Eventually I found Kahneman and Tversky’s work on bounded rationality. As part of my PhD I came across the person-time-slices work of Derek Parfit and the notion of discontinuous personhood.

Ten years after I left economics, behavioural economics (which spawned Nudge: advertising principles applied to behavioural change) became mainstream. Scroll forward twenty years and I can see a simplistic view of what people want appearing again, but this time as media recommendations and social-media content-stream personalisation. Once again there’s an underlying assumption that there’s something fundamental to us about our superficial wants, and that these “preferences” are immutable.

It’s naive to assume that because I have bought a lamp that I’ll want to buy more lamps and therefore lamps should follow me across the internet. It’s silly to assume that because I watched Midsomer Murders repeats last night while programming I’ll also want to watch it this evening with my partner. It’s against the available evidence to assume that my preferences will not change if I am constantly subjected to a stream of people or other sources expressing particular views. It’s cynical to base a business model on advertising and simultaneously claim that filtering algorithms used in social media do not affect behaviour. My options and wants are not immutable: they depend on the media I see and hear, as well as how I feel and who I’m with, where I go, and who I talk to. It’s not just about variations on a theme of me either: they can and will change over time.

I’m looking at the media side of this in my day job. I think personalised media recommendations are wrongheaded in that they assume there’s a fundamental “me” to be addressed, and I think that hyper-personalised recommendations can be hugely damaging to people and civic society. I think that negotiated space between people of different opinions is an essential component of democracy and civilised living. I think part of this is giving people the opportunity and practice of negotiating their shared media space by using media devices together. So that’s what we’re doing.

Anyway. Rant over. Back to libbybot.