Moving to Kolab from Google mail (“G-Suite”)

I had a seven-year-old Google ‘business’ account (now called ‘G Suite’, previously Google Apps, I think) from back when it was free. Danbri put me onto it, and it was brilliant because you can use your own domain with it. It’s been very useful, but I’ve been thinking of moving to a paid-for service for a while. Kolab Now have strong security and privacy, a readable ToS, and with a group account you can use your own domain. I finally moved my mail today.

It wasn’t hard but this isn’t the sort of thing I do a lot and the information I found was a bit piecemeal. So here are some hints in case it’s useful to anyone else.

  1. Sign up for Kolab Now. You’ll need another email address to do that from, i.e. not the one you’re moving. An invoice to pay will be sent to that address a bit later (I was a bit surprised not to pay immediately). It’s about £5 / month for their group ‘lite’ service.
  2. Archive your old email from Google. I used gmvault’s Mac OS X binary. I was originally intending to sync my email with my new account, but storing as much as I have costs substantially more on Kolab Now, so I’ve made an archive instead (which I can still access from Mail.app). I had 14 GB in there, which took ~10 hours:
    ./gmvault sync me@mydomain.org
    ./gmvault export ~/old_email
  3. Log into the admin account of your google account and delete all the non-admin users. You get the option to download their data as you go, which is an excellent feature, kudos to Google.
  4. Actually deleting the account required going to ‘Billing’ as described here in step 5
  5. Login to your domain name registrar and delete the Google MX records
  6. Add the Kolab Now TXT or CNAME records to verify you own the domain (once logged in, Kolab’s instructions for this are under ‘hosting’)
  7. Once verified, hosting -> show DNS records will contain the new Kolab MX records, which you’ll need to add to your domain DNS
  8. Straight away you should be able to see email in the Kolab Now web client, and, depending on the TTL you’ve set, you should be able to send to it shortly afterwards
  9. Mail.app IMAP instructions are here
  10. To access the archive you made, use Mail.app’s import function.
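Steps 5–7 boil down to swapping one set of MX records for another. In zone-file terms the change looks something like this – the hostnames, priorities and TTLs here are illustrative, so use the exact values Kolab Now shows you under hosting → show DNS records:

```
; remove the old Google records, which look like e.g.
; mydomain.org.   3600  IN  MX  1   aspmx.l.google.com.

; and add the Kolab Now ones, e.g.
mydomain.org.     3600  IN  MX  10  mx01.kolabnow.com.
mydomain.org.     3600  IN  MX  20  mx02.kolabnow.com.
```

`dig MX mydomain.org` is a quick way to check when the new records have propagated.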

A simple Raspberry Pi-based picture frame using Flickr

I made this Raspberry Pi picture frame – initially with a screen – as a present for my parents for their wedding anniversary. After user testing, I realised that what they really wanted was a tiny version that they could plug into their TV, so I recently made a Pi Zero version to do that.

It uses a Raspberry Pi 3 or Pi Zero with full-screen Chromium. I’ve used Flickr as a backend: I made a new account and used their handy upload-by-email function (which you can set to make uploads private) so that all members of the family can send pictures to it.


I initially assumed that a good frame would be ambient – stay on the same picture for say, 5 minutes, only update itself once a day, and show the latest pictures in order. But user testing – and specifically an uproarious party where we were all uploading pictures to it and wanted to see them immediately – forced a redesign. It now updates itself every 15 minutes, uses the latest two plus a random sample from all available pictures, shows each picture for 20 seconds, and caches the images to make sure that I don’t use up all my parents’ bandwidth allowance.
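The selection logic ended up simple enough to fit in a few lines. This is just a sketch of it in Python, not the actual frame code – the function name and default sample size are mine:

```python
import random

def choose_pictures(all_photos, sample_size=8):
    """Pick the photos for the next 15-minute cycle: the two newest,
    plus a random sample drawn from everything else.
    all_photos is assumed to be sorted oldest-first."""
    latest = all_photos[-2:]
    rest = [p for p in all_photos if p not in latest]
    sample = random.sample(rest, min(sample_size, len(rest)))
    return latest + sample

# Each chosen picture is then shown for 20 seconds in turn.
```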

The only tricky technical issue is finding the best part of the picture to display. It will usually be a landscape display (although you could configure a Pi screen to be vertical), so that means you’re either going to get a lot of black onscreen or you’ll need to figure out a rule of thumb. On a big TV this is maybe less important. I never got amazing results, but I had a play with both heuristics and face detection, and both moderately improved matters.
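For the heuristic side, the kind of rule of thumb I mean looks roughly like this – a sketch, not the code I actually ran: take the biggest crop matching the display’s aspect ratio, biased towards the top of portrait pictures, where faces tend to be.

```python
def crop_box(img_w, img_h, target_w, target_h, top_bias=0.33):
    """Rule-of-thumb crop: the largest box matching the display's
    aspect ratio, centred horizontally but biased towards the top
    of the picture. Returns (left, top, right, bottom)."""
    target_aspect = target_w / target_h
    if img_w / img_h > target_aspect:
        # image is wider than the display: keep full height, trim sides
        crop_h = img_h
        crop_w = round(img_h * target_aspect)
        left = (img_w - crop_w) // 2
        top = 0
    else:
        # image is taller than the display: keep full width, trim
        # top/bottom, keeping the crop nearer the top than dead centre
        crop_w = img_w
        crop_h = round(img_w / target_aspect)
        left = 0
        top = int((img_h - crop_h) * top_bias)
    return (left, top, left + crop_w, top + crop_h)
```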

It’s probably not a great deal different to what you’d get in any off the shelf electronic picture frame, but I like it because it’s fun and easy to make, configurable and customisable. And you can just plug it into your TV. You could make one for several members of a group or family based on the same set of pictures, too.

Version 1: Pi3, official touchscreen (you’ll need a 2.5A power supply to power them together), 8GB micro SD card, and (if you like) a ModMyPi customisable Pi screen stand.

Version 2: Pi Zero, micro USB converter, USB wifi, mini HDMI converter, HDMI cable, 8GB micro SD card, data micro USB cable, maybe a case.

The Zero can’t really manage the face detection, though I’m not convinced it matters much.

It should take < 30 minutes.

The code and installation instructions are here.


LIRC on a Raspberry Pi for a silver Apple TV remote

The idea here is to control a full-screen chromium webpage via a simple remote control (I’m using it for full-screen TV-like prototypes). It’s very straightforward really, but

  • I couldn’t find the right kind of summary of LIRC, the linux infrared remote tool
  • It took me a while to realise that the receiver end of IR was so easy if you have GPIO pins (as on a Raspberry Pi) rather than using USB
  • Chromium 51 is now the default browser on Raspbian Jessie, making it easy to do quite sophisticated web interfaces on the Pi3

The key part is the mapping between what LIRC (or its daemon, lircd) does when it hears a keypress and what you want it to do. Basically there’s a config file, ~/.lircrc, that maps each remote button to an arbitrary command, and irexec runs whatever command is configured – in my case xdotool commands that send keypresses to the browser.

It’s all a bit confusing, as remote key presses get mapped to the nearest keyboard equivalents, which the page then handles in JavaScript. But it works perfectly.

I’ve provided an example lircd.conf for silver Apple TV remotes, but you can generate your own for any IR remote easily enough.

The way I’ve used xdotool assumes there’s a single webpage open in chromium with a specific title – “lirc-example”.
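To make that concrete, here’s an illustrative ~/.lircrc fragment (not my actual config – the button names must match whatever is in your lircd.conf, and the keys sent are just examples):

```
# ~/.lircrc — read by irexec; button names must match your lircd.conf
begin
    prog   = irexec
    button = KEY_UP
    config = xdotool search --name lirc-example key Up
end

begin
    prog   = irexec
    button = KEY_DOWN
    config = xdotool search --name lirc-example key Down
end
```

Each button press makes irexec run the configured xdotool command, which finds the chromium window by its title and sends it a keypress.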

Full instructions are on GitHub.


Links

lirc

xdotool

irexec

Twitter for ESP 8266

I’ve been using the Wemos D1 Mini ESP 8266 for lots of projects, and I like it very much – being an ESP, it’s got wifi, you can use it straightforwardly with Arduino libraries, it has plenty of pins, and it uses micro USB rather than mini. Dirk introduced me to a nice library to make it broadcast an access point if it can’t get online. There are loads of D1 Minis on eBay.

I wanted to make a tweeting spatula for Jasmine because of her brilliant work on CAKE, the Cook Along Kitchen Experience. It seemed like the obvious thing to do. But tweeting means implementing Twitter’s OAuth request signing, which is reasonably complicated and not something many people want to code, especially on such a constrained platform. Most people use some sort of IFTTT-type mechanism, connecting accounts on the cloud side, or run a server-side proxy themselves to do the heavy lifting.

But four months ago, Soramimi wrote a Twitter client for the ESP, including from-scratch implementations of base64 and SHA-1! It’s amazing. So that’s what I used. I barely changed it, just adding the AP library and having it tweet when the gyroscope readings passed a particular threshold (using the Adafruit sensor library) rather than via a webpage.


I used a small rechargeable battery, and some cable webbing to cover it all in a handle – Jon Tutcher’s idea 🙂

The wiring’s straightforward – here’s a picture from when I was moving from the plugged-together version to a soldered one.


I think maybe an accelerometer might work better – I’m going to try these cheapo ones from Pimoroni next. The only issue I really had (and it was annoying, but there we go) was that I kept running out of memory on the D1 because, it turned out, of this line:

  if(event.gyro.x > 0.7 && event.gyro.y > 0.7 && event.gyro.z > 0.7){

The error is:

[...]
../../../xtensa-lx106-elf/bin/ld: Gyro_ESP_Twitter.ino.elf section `.text' will not fit in region `iram1_0_seg'
collect2: error: ld returned 1 exit status
exit status 1
Error compiling for board WeMos D1 R2 & mini.

I was convinced it was all the libraries I was using, but no. The same line rewritten as

 const float threshold = 0.7;
 if(event.gyro.x > threshold && event.gyro.y > threshold && event.gyro.z > threshold){

worked perfectly. I guess the bare 0.7 literals are doubles, so the comparisons were dragging in double-precision support code that wouldn’t fit. Oops 🙂


A speaking camera using Pi3 and Tensorflow

Danbri made one of these and I was so impressed I had a go myself, with a couple of tweaks. It’s very easy to do. He did all the figuring out what needed to be done – there’s something similar here which did the rounds recently. Others have done the really heavy lifting – in particular, making tensorflow work on the Pi.

Barnoid has done lovely things with a similar but cloud-based setup for his Poetoid Lyricam – he used a captioner similar to this one that isn’t quite on a Pi with Python hooks yet (but nearly). (Barnoid update – he used Torch with neuraltalk2.)

The gap between taking a photo and it speaking is 3-4 seconds. Sometimes it seems to cache the last photo. It’s often wrong 🙂

I used USB audio and a powerful-ish speaker. A DAC would also be a good idea.

Instructions

Image the pi and configure

diskutil list                   # find the SD card's disk number N
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN

log in to the pi, expand file system, enable camera

sudo raspi-config

optionally, add in usb audio or a DAC

Test

speaker-test

install pico2wave (I tried espeak but it was very indistinct)

sudo pico /etc/apt/sources.list

# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src http://archive.raspbian.org/raspbian/ jessie main contrib non-free rpi

sudo apt-get update
sudo apt-get install fakeroot debhelper libtool help2man libpopt-dev hardening-wrapper autoconf
sudo apt-get install automake1.11 # the build requires this specific version
mkdir pico_build
cd pico_build
apt-get source libttspico-utils
cd svox-1.0+git20130326 
dpkg-buildpackage -rfakeroot -us -uc
cd ..
sudo dpkg -i libttspico-data_1.0+git20130326-3_all.deb
sudo dpkg -i libttspico0_1.0+git20130326-3_armhf.deb
sudo dpkg -i libttspico-utils_1.0+git20130326-3_armhf.deb

test

sudo apt-get install mplayer
pico2wave -w test.wav "hello alan" && mplayer test.wav

install tensorflow on raspi

sudo apt-get install python-pip python-dev
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/raw/master/bin/tensorflow-0.10.0-cp27-none-linux_armv7l.whl
sudo pip install tensorflow-0.10.0-cp27-none-linux_armv7l.whl

install prerequisites for classify_image.py

git clone https://github.com/tensorflow/tensorflow.git # takes ages
sudo pip install imutils picamera 
sudo apt-get install python-opencv

test

cd /home/pi/tensorflow/tensorflow/models/image/imagenet

install danbri / my hacked version of classify_image.py

mv classify_image.py classify_image.py.old
curl -O "https://gist.githubusercontent.com/libbymiller/afb715ac53dcc7b85cd153152f6cd75a/raw/2224179cfdc109edf2ce8408fe5e81ce5a265a6e/classify_image.py"

run

python classify_image.py

done!
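For what it’s worth, the speaking side of the hacked classify_image.py boils down to shelling out to pico2wave and then mplayer with the top caption. A rough Python sketch of that glue – speak_commands and speak are my names, not functions from the gist:

```python
import subprocess

def speak_commands(caption, wav_path="/tmp/caption.wav"):
    """Build the two commands used to voice a caption:
    pico2wave writes the wav, then mplayer plays it."""
    return [
        ["pico2wave", "-w", wav_path, caption],
        ["mplayer", wav_path],
    ]

def speak(caption):
    # shell out to pico2wave then mplayer (assumes both are installed)
    for cmd in speak_commands(caption):
        subprocess.call(cmd)
```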