Capturing button presses from Bluetooth hands-free kits on a Raspberry Pi

Is there anything better than this wonky and unreliable hack for capturing keypresses from a hands-free kit?

sudo hciconfig hci0 down
sudo hciconfig hci0 up
sudo hcidump -l 1 | grep ACL
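
If it’s useful, here’s the sort of wrapper I mean: a Python sketch that treats each ACL line the pipeline prints as a button press (run it with sudo; on_press is just a placeholder for whatever should be triggered, the AIY recogniser in my case).

#!/usr/bin/env python
# sketch: treat each ACL line from the hcidump hack as a button press
import subprocess

# --line-buffered so grep passes lines through promptly
CMD = "hcidump -l 1 | grep --line-buffered ACL"

def on_press():
    print("button pressed - trigger the recogniser here")

proc = subprocess.Popen(CMD, shell=True, stdout=subprocess.PIPE,
                        universal_newlines=True)
for line in proc.stdout:
    on_press()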

As the kit connects, I see this in syslog:

Sep 27 21:17:10 gvoice bluetoothd[532]: Unable to get connect data for Hands-Free Voice gateway: getpeername: Transport endpoint is not connected (107)

Sep 27 21:17:10 gvoice bluetoothd[532]: Unable to get connect data for Headset Voice gateway: getpeername: Transport endpoint is not connected (107)

I can see it appearing as an input device:

Sep 27 21:14:29 gvoice kernel: [  827.342038] input: B8:D5:0B:4C:CF:59 as /devices/virtual/input/input6

evtest gives:

sudo evtest
No device specified, trying to scan all of /dev/input/event*
Available devices:
/dev/input/event0: B8:D5:0B:4C:CF:59
Select the device event number [0-0]: 0
[...]
    Event code 402 (KEY_CHANNELUP)
    Event code 403 (KEY_CHANNELDOWN)
    Event code 405 (KEY_LAST)
  Event type 2 (EV_REL)
Key repeat handling:
  Repeat type 20 (EV_REP)
    Repeat code 0 (REP_DELAY)
      Value    300
    Repeat code 1 (REP_PERIOD)
      Value     33
Properties:
Testing ... (interrupt to exit)

but there are never any events.
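
For what it’s worth, if events ever did arrive, a small python-evdev reader like this sketch is what I’d expect to use instead of the hcidump hack (assuming the device turns up as /dev/input/event0, as above):

# sketch: print key-down events from the headset's input device
from evdev import InputDevice, categorize, ecodes

dev = InputDevice('/dev/input/event0')
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:  # key down
        print('key press:', categorize(event))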

(I’m asking as I have it nicely hooked up to the Google voice recogniser via AIY. But it needs (and I want) a button press to trigger it. With a bit of twiddling, hands-free kits and Bluetooth headsets work nicely with Raspbian Stretch.)

Leaving Flickr

I’m very sad about this, especially because of all the friends I have made on Flickr, but with Verizon’s acquisition of Yahoo (and so Flickr) and the consequent sharing of Flickr user data with the new “Oath” “”””family””””, I feel like it’s time for me to admit just how shit Flickr has become and finally leave. I’ve been using it (and paying for it) for 10 years though, so I’ve a lot of pictures: about 13K of them, around 23GB in all. I’ve got all my data and will delete my account tomorrow (which I think is their deadline, but they seem confused about it).

It’s been a busy week, so I don’t know what I’ll replace it with yet; maybe something simple and static I’ll write myself, like the thing I had 11 years ago, with some, I dunno, RSS feeds or something. But anyway, here’s the best way I’ve found to get my data back, and kudos to Flickr that the API is still there to make it possible. I tried a few things and flickrmirrorer looks best. It’s straightforward for pictures, though some older videos need downloading by hand. As far as I can tell it gets all the metadata for each photo. No comments though, and no notes that I can see.

Because of the video issue I did images first, leaving it overnight (I forgot to time it):

mkdir pictures
cd pictures/
mkdir flickr
cd flickr/
git clone https://github.com/markdoliner/flickrmirrorer
cd flickrmirrorer/
sudo easy_install flickrapi
mkdir data
./flickrmirrorer.py --ignore-videos --statistics /Users/libby/personal/pictures/flickr/flickrmirrorer/data/

Output:

New photos / videos: 12921
Deleted photos / videos: 0
Modified photos / videos: 0
Modified albums: 198
Modified collections: 0

Check that it matches the volume of data from your stats page (roughly is all you can hope for; there’s a problem with Flickr’s reporting):

du -sh .

Check a couple of files to make sure we’re actually getting some data:

open ./data/photostream/72332206.jpg
cat ./data/photostream/72332206.jpg.metadata

Then the videos:

./flickrmirrorer.py --ignore-photos --statistics /Users/libby/personal/pictures/flickr/flickrmirrorer/data/

I then downloaded about 50 older ones by hand.

I was worried I didn’t have the metadata for some of them, so I hacked together a script that just got all the video metadata – which is here.

I also wanted a list of my favourites – I rolled my own script for that, here. I hardcoded the number of pages, sorry!
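
(Something like this sketch would avoid the hardcoding, using the flickrapi library and flickr.favorites.getPublicList; the API key, secret and NSID are placeholders:)

# sketch: page through public favourites rather than hardcoding the page count
import flickrapi

API_KEY = 'YOUR_API_KEY'        # placeholders - get a key from Flickr's App Garden
API_SECRET = 'YOUR_API_SECRET'
USER_ID = '12345678@N00'        # your NSID

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

page, pages = 1, 1
while page <= pages:
    resp = flickr.favorites.getPublicList(user_id=USER_ID, per_page=500, page=page)
    pages = int(resp['photos']['pages'])
    for photo in resp['photos']['photo']:
        print(photo['id'], photo.get('title', ''))
    page += 1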

There doesn’t seem to be any way to get notes, which sucks.

To use these two scripts you need to get an API key from Flickr, here.

I’m really, really annoyed about all the cool URLs I’ll kill because of this. Oh well.

Update: Matthew in the comments thought that notes were available from the API, and he’s right. flickrmirrorer doesn’t get them (and actually doesn’t get as much metadata as I want), so I grabbed all the ids of my photos, using the dump from flickrmirrorer as a starting point:

find . | grep "\.metadata" > list_of_photos.txt

and then used this script to get as much metadata as I could.
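
(Roughly what’s involved, as a sketch rather than the script itself: pull the photo id out of each .metadata path and ask flickr.photos.getInfo for everything it has, notes included. Credentials are placeholders again:)

# sketch: fetch fuller metadata for every photo id in list_of_photos.txt
import json
import os
import flickrapi

API_KEY = 'YOUR_API_KEY'        # placeholders
API_SECRET = 'YOUR_API_SECRET'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

with open('list_of_photos.txt') as f:
    for line in f:
        # paths look like ./data/photostream/72332206.jpg.metadata
        photo_id = os.path.basename(line.strip()).split('.')[0]
        info = flickr.photos.getInfo(photo_id=photo_id)
        # notes, tags, urls etc. come back under info['photo']
        with open(photo_id + '.json', 'w') as out:
            json.dump(info, out)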

I also realised I didn’t have a list of my friends 😦 So, I wrote one more script to do that.
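
(Again just a sketch of the idea rather than the script itself: flickr.contacts.getPublicList returns the public contact list without needing the full auth dance.)

# sketch: dump a user's public contact list
import flickrapi

API_KEY = 'YOUR_API_KEY'        # placeholders
API_SECRET = 'YOUR_API_SECRET'
USER_ID = '12345678@N00'

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format='parsed-json')

page, pages = 1, 1
while page <= pages:
    resp = flickr.contacts.getPublicList(user_id=USER_ID, per_page=1000, page=page)
    pages = int(resp['contacts']['pages'])
    for contact in resp['contacts']['contact']:
        print(contact['nsid'], contact['username'])
    page += 1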

Outline a bitmap in Inkscape

I keep doing this for laser cuts but getting a double outline instead of a single outline, and so a double cut. This is because (apparently) Inkscape doesn’t do “centre line tracing”. For simple shapes there’s a workaround though:

  • Paste the bitmap into a document
  • Path -> Trace Bitmap, inverting the image
  • Delete the image, select the trace
  • Path -> Stroke to Path
  • Delete the square outline of the image

I’ve put a video of this on Flickr.

Libbybot – a presence robot with Chromium 51, Raspberry Pi and RTCMultiConnection for WebRTC

Edit, July 2017: I’ve put detailed instructions and code on GitHub. You should follow those if you really want to try it (more).

I’ve been working on a cheap presence robot for a while, gradually and with help from buddies at hackspace and work. I now use it quite often to attend meetings, and I’m working with Richard on ways to express interest, boredom and other emotions at a distance using physical motion (as well as greetings and ‘there’s someone here’).

I’ve noticed a few hits on my earlier instructions / code, so thought it was worth updating my notes a bit.

(More images and video on Flickr)

The main hardware change is that I’ve dispensed with the screen, which in its small form wasn’t very visible anyway. This has led to a rich seam of interesting research around physical movement: it needs to show that someone is there somehow, and it’s nice to be able to wave when someone comes near. It’s also very handy to be able to move left and right to talk to different people. It’s gone through a “bin”-like iteration, where the camera popped out of the top; a “The Stem”-inspired two-stick version (one stick for the camera, one to gesture); and is now a much improved IKEA ESPRESSIVO lamp hack with the camera and gesture “stick” combined again. People like this latest version much more than the bin or the sticks, though I haven’t yet tried it in situ. Annoyingly the lamp itself is discontinued, a pity because it fits a Pi 3 (albeit with a right-angled power supply cable) and some servos (driven directly from the Pi with ServoBlaster) rather nicely.
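
Driving the servos that way is just writing commands to ServoBlaster’s device file; here’s a sketch, assuming servod is running and a servo on ServoBlaster channel 0:

# sketch: waggle a servo through ServoBlaster's device file
import time

def set_servo(channel, percent):
    # ServoBlaster accepts "<channel>=<value>%" where value is 0-100
    with open('/dev/servoblaster', 'w') as f:
        f.write('%d=%d%%\n' % (channel, percent))

# a little left-right wave
for pos in (30, 70, 30, 70, 50):
    set_servo(0, pos)
    time.sleep(0.4)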

The main software change is that I’ve moved it from EasyRTC to RTCMultiConnection, because I couldn’t get EasyRTC to work with data+audio (rather than data+audio+video) for some reason. I like RTCMultiConnection a lot – it’s both simple and flexible (I get the impression EasyRTC was based on older code while RTCMultiConnection has been written from scratch). The RTCMultiConnection examples and code snippets were also easier to adapt for my own, slightly obscure, purposes.

I’ve also moved from using a Jabra speaker / mic to a Sennheiser one. The Jabra (while excellent for connecting to laptops to improve the sound on Skype and similar meetings) was unreliable on the Pi, dropping down to low volume, with the mic periodically not working with Chromium (even when used with a powered USB hub). The Sennheiser one is (even) pricier but much more reliable.

Hope that helps, if anyone else is trying this stuff. I’ll post the code sometime soon. Most of this guide still holds.
