Leaving Flickr

I’m very sad about this, especially because of all the friends I have made on Flickr, but with Verizon’s acquisition of Yahoo (and so Flickr) and the consequent sharing of Flickr user data with the new “Oath” “”””family”””” I feel like it’s time for me to admit just how shit Flickr has become and finally leave. I’ve been using it (and paying for it) for 10 years though, so I’ve a lot of pictures: about 13K of them, roughly 23GB. I’ve got all my data and will delete my account tomorrow (which I think is their deadline, but they seem confused about it).

It’s been a busy week so I don’t know what I’ll replace it with yet; maybe something simple and static I’ll write myself, like the thing I had 11 years ago, with some, I dunno, RSS feeds or something. But anyway, here’s the best way I’ve found to get my data back, and kudos to Flickr that the API is still there to make it possible. I tried a few things and flickrmirrorer looks best. It’s straightforward for pictures, though some older videos need downloading by hand. As far as I can tell it gets all the metadata for each photo. No comments though, and no notes that I can see.

Because of the video issue I did images first, leaving it running overnight (I forgot to time it):

mkdir pictures
cd pictures/
mkdir flickr
cd flickr/
git clone https://github.com/markdoliner/flickrmirrorer
cd flickrmirrorer/
sudo easy_install flickrapi
mkdir data
./flickrmirrorer.py --ignore-videos --statistics /Users/libby/personal/pictures/flickr/flickrmirrorer/data/

Output:

New photos / videos: 12921
Deleted photos / videos: 0
Modified photos / videos: 0
Modified albums: 198
Modified collections: 0

Check it matches the volume of data from your Flickr stats page (a rough match might be all you can hope for; there’s a problem with Flickr’s reporting):

du -sh .

Check a couple of files to make sure we’re actually getting some data:

open ./data/photostream/72332206.jpg
cat ./data/photostream/72332206.jpg.metadata

Then video:

./flickrmirrorer.py --ignore-photos --statistics /Users/libby/personal/pictures/flickr/flickrmirrorer/data/

and then downloaded about 50 of the older ones by hand.

I was worried I didn’t have the metadata for some of them, so I hacked together a script that just got all the video metadata – which is here.
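The actual script is the one linked above; in case the link ever dies, the rough idea, sketched here with the Python flickrapi library (placeholder API key, secret and user id, no error handling), is to walk the photostream and dump getInfo for anything whose media type is video:

# Sketch only: dump getInfo metadata for every video in a photostream.
# API_KEY, API_SECRET and USER_ID are placeholders.
import json
import flickrapi

API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'
USER_ID = 'your-nsid'   # e.g. '12345678@N00'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

page = 1
while True:
    resp = flickr.people.getPublicPhotos(user_id=USER_ID, extras='media',
                                         per_page=500, page=page)
    for p in resp['photos']['photo']:
        if p.get('media') == 'video':
            info = flickr.photos.getInfo(photo_id=p['id'])
            with open('%s.video.metadata' % p['id'], 'w') as f:
                json.dump(info, f, indent=2)
    if page >= int(resp['photos']['pages']):
        break
    page += 1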

I also wanted a list of my favourites – I rolled my own script for that, here. I hardcoded the number of pages, sorry!
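Again, the linked script is the real one; the gist of it, sketched with flickrapi’s public favourites call and paging properly instead of hardcoding the page count (placeholder credentials as before):

# Sketch only: list a user's public favourites (url + title), page by page.
import flickrapi

API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'
USER_ID = 'your-nsid'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

page = 1
while True:
    resp = flickr.favorites.getPublicList(user_id=USER_ID, per_page=500, page=page)
    for p in resp['photos']['photo']:
        print('https://www.flickr.com/photos/%s/%s %s' % (p['owner'], p['id'], p['title']))
    if page >= int(resp['photos']['pages']):
        break
    page += 1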

There doesn’t seem to be any way to get notes, which sucks.

To use these two scripts (or the sketches above) you need to get an API key from Flickr here.

I’m really really annoyed about all the cool urls I’ll kill because of this. Oh well.

Update: Matthew in the comments thought that notes were available from the API, and he’s right. flickrmirrorer doesn’t get them (and actually doesn’t get as much metadata as I want), so I grabbed all the ids of my photos using the dump from flickrmirrorer as a starting point:

find . | grep "\.metadata" > list_of_photos.txt

and then used this script to get as much metadata as I could.
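The linked script is what I actually ran; roughly, the approach is to pull the photo ids out of that file list and call flickr.photos.getInfo on each one, since getInfo’s response includes the notes. A sketch (placeholder credentials and a made-up output filename scheme):

# Sketch only: re-fetch full getInfo metadata (including notes) for each
# photo id found in the flickrmirrorer dump.
import json
import re
import flickrapi

API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

seen = set()
with open('list_of_photos.txt') as f:
    for line in f:
        m = re.search(r'(\d+)\.\w+\.metadata', line)
        if not m or m.group(1) in seen:
            continue
        seen.add(m.group(1))
        info = flickr.photos.getInfo(photo_id=m.group(1))
        # notes, if any, are under info['photo']['notes']['note']
        with open('%s.fullmetadata.json' % m.group(1), 'w') as out:
            json.dump(info, out, indent=2)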

I also realised I didn’t have a list of my friends 😦 So, I wrote one more script to do that.
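That one boils down to a loop over the public contacts call; a sketch (placeholder credentials, not the script itself):

# Sketch only: print the public contact list (username + nsid) for a user.
import flickrapi

API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'
USER_ID = 'your-nsid'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

page = 1
while True:
    resp = flickr.contacts.getPublicList(user_id=USER_ID, per_page=1000, page=page)
    for c in resp['contacts']['contact']:
        print(c['username'], c['nsid'])
    if page >= int(resp['contacts']['pages']):
        break
    page += 1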

Outline a bitmap in Inkscape

I keep doing this for lasercuts but getting a double outline instead of a single outline, and so a double cut. This is because (apparently) Inkscape doesn’t do “centre line tracing”. For simple shapes there’s a work-around though:

  • Paste the bitmap into a document
  • Path -> Trace Bitmap, inverting the image
  • Delete the image, select the trace
  • Path -> Stroke to Path
  • Delete the square outline of the image

I’ve put a video of this on Flickr.

Libbybot – a presence robot with Chromium 51, Raspberry Pi and RTCMultiConnection for WebRTC

Edit, July 2017: I’ve put detailed instructions and code in github. You should follow those if you really want to try it (more).

I’ve been working on a cheap presence robot for a while, gradually and with help from buddies at hackspace and work. I now use it quite often to attend meetings, and I’m working with Richard on ways to express interest, boredom and other emotions at a distance using physical motion (as well as greetings and ‘there’s someone here’).

I’ve noticed a few hits on my earlier instructions / code, so thought it was worth updating my notes a bit.

(More images and video on Flickr)

The main hardware change is that I’ve dispensed with the screen, which in its small form wasn’t very visible anyway. This has led to a rich seam of interesting research around physical movement: it needs to show that someone is there somehow, and it’s nice to be able to wave when someone comes near. It’s also very handy to be able to move left and right to talk to different people. It’s gone through a “bin”-like iteration, where the camera popped out of the top, a “The Stem”-inspired two-stick version (one stick for the camera, one to gesture), and is now a much improved IKEA ESPRESSIVO lamp hack with the camera and gesture “stick” combined again. People like this latest version much more than the bin or the sticks, though I haven’t yet tried it in situ. Annoyingly the lamp itself is discontinued, a pity because it fits a Pi3 (albeit with a right-angled power supply cable) and some servos (driven from the Pi directly with ServoBlaster) rather nicely.
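For anyone wondering about the ServoBlaster bit: once the servod daemon is running it exposes /dev/servoblaster, and you move a servo by writing channel=pulse-width lines to it. A minimal sketch (the channel number and pulse widths below are made up; check your own wiring and your servo’s limits):

# Sketch only: sweep a servo via ServoBlaster's /dev/servoblaster interface.
# Assumes the servod daemon is already running.
import time

PAN_SERVO = 0   # ServoBlaster channel number, not a GPIO number

def set_servo(channel, width):
    # width is in ServoBlaster steps of 10us, so 50-250 is roughly 0.5-2.5ms
    with open('/dev/servoblaster', 'w') as dev:
        dev.write('%d=%d\n' % (channel, width))

# sweep from one side to the other and back
for width in list(range(80, 230, 10)) + list(range(220, 70, -10)):
    set_servo(PAN_SERVO, width)
    time.sleep(0.1)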

The main software change is that I’ve moved it from EasyRTC to RTCMultiConnection, because I couldn’t get EasyRTC to work with data+audio (rather than data+audio+video) for some reason. I like RTCMultiConnection a lot – it’s both simple and flexible (I get the impression EasyRTC was based on older code while RTCMultiConnection has been written from scratch). The RTCMultiConnection examples and code snippets were also easier to adapt for my own, slightly obscure, purposes.

I’ve also moved from using a Jabra speaker / mic to a Sennheiser one. The Jabra (while excellent for connecting to laptops to improve the sound on Skype and similar meetings) was unreliable on the Pi, dropping down to low volume and with the mic periodically not working with Chromium (even when used with a powered USB hub). The Sennheiser one is (even) pricier but much more reliable.

Hope that helps, if anyone else is trying this stuff. I’ll post the code sometime soon. Most of this guide still holds.


Immutable preferences, economics, social media and algorithmic recommendations

One of the things that encouraged me to leave economics after doing a PhD was that – at the time, and still in textbook microeconomics – the model of a person was so basic it could not encompass wants and needs that change.

You, to an economist, usually look like this:

“Indifference Curves” over two goods, by SilverStar at English Wikipedia, CC BY 2.5

You have (mathematically-defined) “rational” preferences between goods and services, and these preferences are assumed to stay the same. Since I’d done a degree which encompassed philosophy and politics as well as economics, this annoyed me tremendously. What about politics? arguing? advertising? newspapers? alcohol? moods? caffeine? sleepiness? Economics works by using simplified models, but the models were far too simplistic to encompass effects I thought were interesting. The wonderful, now-dead M. O. L. Bacharach helped me understand game theory, which had a more sophisticated model of interactions and behaviour. Eventually I found Kahneman and Tversky’s work on bounded rationality. As part of my PhD I came across the person-time-slices work of Derek Parfit and the notion of discontinuous personhood.

Ten years after I left economics, behavioural economics (which spawned Nudge, advertising principles applied to behavioural change) became mainstream. Scroll forward twenty years and I can see a simplistic view of the things that people want appearing again, but this time as media recommendations and social media content-stream personalisation. Once again there’s an underlying assumption that there’s something fundamental to us about our superficial wants, and that these “preferences” are immutable.

It’s naive to assume that because I have bought a lamp that I’ll want to buy more lamps and therefore lamps should follow me across the internet. It’s silly to assume that because I watched Midsomer Murders repeats last night while programming I’ll also want to watch it this evening with my partner. It’s against the available evidence to assume that my preferences will not change if I am constantly subjected to a stream of people or other sources expressing particular views. It’s cynical to base a business model on advertising and simultaneously claim that filtering algorithms used in social media do not affect behaviour. My options and wants are not immutable: they depend on the media I see and hear, as well as how I feel and who I’m with, where I go, and who I talk to. It’s not just about variations on a theme of me either: they can and will change over time.

I’m looking at the media side of this in my day job. I think personalised media recommendations are wrongheaded in that they assume there’s a fundamental “me” to be addressed; and I think that hyper-personalised recommendations can be hugely damaging to people and civic society. I think that negotiated space between people of different opinions is an essential component of democracy and civilised living. I think a part of this is giving people the opportunity and practice of negotiating their shared media space by using media devices together. So that’s what we’re doing.

Anyway. Rant over. Back to libbybot.