Pi / openCV / Tensorflow again

Cat detector code updated for Raspbian Buster. I used the Lite version. A few things have changed since last time. The code is here.

Download Raspbian

I got Raspbian Buster Lite (from https://www.raspberrypi.org/downloads/raspbian/ )

Burn it onto an SD card.

Enable ssh:

touch /Volumes/boot/ssh

Add the wifi:
nano /Volumes/boot/wpa_supplicant.conf

The file should contain something like:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=GB

network={
    ssid="your-network-name"
    psk="your-network-password"
}

then eject the card and put it in the pi.

ssh into it from your laptop:

ssh pi@raspberrypi.local
password: raspberry

Change the hostname from the default if you like:

sudo nano /etc/hosts
sudo nano /etc/hostname


sudo reboot

Set up a virtualenv for python

This is not strictly necessary but keeps things tidy. You can also just use the built in python, just make sure you are using python3 and pip3 if so.

ssh into the pi again, then:

sudo apt update
sudo apt-get install python3-pip
sudo pip3 install virtualenv
virtualenv env
source env/bin/activate
(env) pi@birdbot:~ $ python --version
Python 3.7.3 # or similar

Enable the camera

sudo raspi-config # and enable camera under 'interfacing'; reboot

Install Tensorflow

Increase the swap size:

sudo nano /etc/dphys-swapfile

The default value in Raspbian is:

CONF_SWAPSIZE=100

We will need to change this to something larger, e.g.:

CONF_SWAPSIZE=1024
Restart the service that manages the swapfile on Raspbian:

sudo /etc/init.d/dphys-swapfile restart

Install tensorflow dependencies

sudo apt-get install libatlas-base-dev
sudo apt-get install git
pip install --upgrade tensorflow

(this takes a few minutes)

Test that tensorflow installed ok:

python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"

You may see an error about hadoop –

HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory.

See also tensorflow/tensorflow#36141. That doesn’t seem to matter.

You could try some user-built tensorflow binaries – I tried this one, which seemed to corrupt my SD card, but haven’t tried this one. Tensorflow 2 would be better to learn (the APIs all changed between 1.4 and 2).

Install OpenCV

sudo apt-get install libjasper-dev libqtgui4 libqt4-test libhdf5-dev libharfbuzz0b libilmbase-dev libopenexr-dev libgstreamer1.0-dev libavcodec-dev libavformat-dev libswscale5

pip install opencv-contrib-python== #(see this)


python -c 'import cv2; print(cv2.__version__)'

Install camera dependencies

pip install imutils picamera

Install speaking dependencies

sudo apt-get install espeak-ng

Get the code

git clone https://github.com/libbymiller/cat_loving_robot
cd cat_loving_robot
python classify_image.py

If you want to add the servos and so on for cat detecting and running towards cats, or start it up automatically, there’s more info on github.

Links for my Pervasive Media Studio talk

I’m giving a talk this afternoon at the Pervasive Media Studio in Bristol about some low-resolution people experiments I’ve been making. Here are some related links:

Exploring the Affect of Abstract Motion in Social Human-Robot Interaction, John Harris and Ehud Sharlin

Libbybot 11 code and instructions

Zamia and scripts for running it on the Raspberry Pi 3

Real_libby code


Matt Jones – Butlers or Centaurs

Tim Cowlishaw‘s colab notebook for GPT-2 retraining

Janelle Shane‘s site and book

This is the voice synthesis code that my colleagues used. They are hoping to open up their version of it soon.

There’s a voice assistant survey that uses the synthesised voices and there’s a blog post about that.

Pimoroni sell Respeaker 4-mics.

I’m libbymiller on twitter.


Tensorflow – saveModel for tflite

I want to convert an existing model to one that will run on a USB stick ‘accelerator’ called Coral. Conversion to tflite is needed for any small devices like these.

I’ve not managed this yet, but here are some notes. I’ve figured out some of it, but came unstuck because some operations (‘ops’) are not yet supported in tflite. But maybe this is still useful to someone, and I want to remember what I did.

I’m trying to change a tensorflow model – for which I only have .meta and .index files – to one with .pb files or variables, which seems to be called a ‘SavedModel’. These have some interoperability, and appear to be a prerequisite for making a tflite model.

Here’s what I have to start with:

ls models/LJ01-1/

Conversion to SavedModel

First, create a SavedModel (this code is for Tensorflow 1.3, but in 2.0 it’s a simple conversion using a command-line tool).

import tensorflow as tf

model_path = 'LJ01-1/model_gs_933k'
output_node_names = ['Merge_1/MergeSummary']
loaded_graph = tf.Graph()

with tf.Session(graph=loaded_graph) as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(model_path + '.meta')
    # Load weights
    saver.restore(sess, model_path)
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)

    builder = tf.saved_model.builder.SavedModelBuilder('new_models')
    op = sess.graph.get_operations()
    input_tensor = [m.values() for m in op][1][0]
    output_tensor = [m.values() for m in op][len(op) - 1][0]

    # https://sthalles.github.io/serving_tensorflow_models/
    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_tensor)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(output_tensor)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'x_input': tensor_info_input},
            outputs={'y_output': tensor_info_output},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'serving_default': prediction_signature})
    builder.save()

I used

output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]

to find out the names of the input and output ops.
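Picking the input and output tensors by list position, as the code above does, is fragile. A slightly safer heuristic is to go by op type – Placeholder ops are usually the inputs. This is just a sketch (guess_io is a made-up name, and the tuples stand in for what sess.graph.get_operations() would give you):

```python
# Sketch only: a heuristic for finding a graph's input and output ops.
# 'ops' is a plain list of (name, op_type) tuples standing in for
# sess.graph.get_operations(), so the idea can be shown without TensorFlow.

def guess_io(ops):
    """Return (input_names, output_name) for a list of (name, op_type)."""
    inputs = [name for name, op_type in ops if op_type == 'Placeholder']
    output = ops[-1][0]  # the last op in graph order is often the output
    return inputs, output

ops = [
    ('datafeeder/inputs', 'Placeholder'),
    ('model/conv1/kernel', 'VariableV2'),
    ('Merge_1/MergeSummary', 'MergeSummary'),
]
print(guess_io(ops))  # (['datafeeder/inputs'], 'Merge_1/MergeSummary')
```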

That gives you a directory (new_models) like:

new_models/saved_model.pb
new_models/variables/variables.data-00000-of-00001
new_models/variables/variables.index

Conversion to tflite

Once you have that, then you can use the command-line tool tflite_convert (examples) – 

tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops

This does the conversion to tflite. And it will probably fail, e.g. mine did this:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CAST, CONCATENATION, CONV_2D, DIV, EXP, EXPAND_DIMS, FLOOR, GATHER, GREATER_EQUAL, LOGISTIC, MEAN, MUL, NEG, NOT_EQUAL, PAD, PADV2, RSQRT, SELECT, SHAPE, SOFTMAX, SPLIT, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, SUM, TRANSPOSE, ZEROS_LIKE. Here is a list of operators for which you will need custom implementations: BatchMatMul, FIFOQueueV2, ImageSummary, Log1p, MergeSummary, PaddingFIFOQueueV2, QueueDequeueV2, QueueSizeV2, RandomUniform, ScalarSummary.

You can add --allow_custom_ops to that, which will let everything through – but it still won’t work if there are ops that are not tflite-supported: you have to write custom operators for the ones that don’t yet work (I’ve not tried this).

But it’s still useful to use --allow_custom_ops, i.e.

tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops --allow_custom_ops

because once you have a tflite file you can visualise the graph using netron. Which is quite interesting, although I suspect it’s not accurate for the ops that were only passed through as custom ops.

>>> import netron 
>>> netron.start('model.tflite')
Serving 'model.tflite' at http://localhost:8080

Update – I forgot a link:

“This document outlines how to use TensorFlow Lite with select TensorFlow ops. Note that this feature is experimental and is under active development. “

Real_libby – a GPT-2 based slackbot

In the latest of my continuing attempts to automate myself, I retrained a GPT-2 model with my iMessages, and made a slackbot so people could talk to it. Since Barney (an expert on these matters) felt it was unethical that it vanished whenever I shut my laptop, it’s now living happily(?) if a little more slowly in a Raspberry Pi 4.

It was surprisingly easy to do, with a few hints from Barney. I’ve sketched out what I did below. If you make one, remember that it can leak out private information – names in particular – and can also be pretty sweary, though mine’s not said anything outright offensive (yet).

fuck, mitzhelaists!

This work is inspired by the many brilliant Twitter bot-makers and machine-learning people out there such as Barney (who has many bots, including inspire_ration and notYourBot, and knows much more about machine learning and bots than I do), Shardcore (who made Algohiggs, which is probably where I got the idea for using GPT-2), and Janelle Shane (whose ML-generated names for e.g. cats are always an inspiration).

First, get your data

The first step was to get at my iMessages. A lot of iPhone data is backed up as sqlite, so if you decrypt your backups and have a dig round, you can use something like baskup. I had to make a few changes but found my data in

/Users/[me]/Library/Application\ Support/MobileSync/Backup/[long number]/3d/3d0d7e5fb2ce288813306e4d4636395e047a3d28

This number – 3d0d7e5fb2ce288813306e4d4636395e047a3d28 – seems always to indicate the iMessage database – though it moves round depending on what version of iOS you have. I made a script to write the output from baskup into a flat text file for GPT-2 to slurp up. I had about 5K lines.
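If baskup doesn’t work for you, the messages can also be pulled straight out with Python’s sqlite3 module. This is a sketch, not baskup’s code: it assumes the chat.db convention of a message table with a text column, and the in-memory database below just mimics that schema for illustration:

```python
import sqlite3

def export_texts(conn):
    """Pull non-empty message texts from an iMessage-style 'message' table."""
    rows = conn.execute(
        "SELECT text FROM message "
        "WHERE text IS NOT NULL AND text != '' ORDER BY rowid"
    ).fetchall()
    return [text for (text,) in rows]

# Stand-in for the real chat.db, which has a much bigger schema:
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE message (rowid INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO message (text) VALUES (?)",
                 [("could you get some milk?",), (None,),
                  ("here's a funny picture of the cat",)])
lines = export_texts(conn)
print(len(lines))  # 2 – one line per GPT-2 training example
```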

Retrain GPT-2

I used this code.

python3 ./download_model.py 117M

PYTHONPATH=src ./train.py --dataset /Users/[me]/gpt-2/scripts/data/

I left it overnight on my laptop and by morning loss and avg were oscillating, so I figured it was done – 3600 epochs. The output from training was fun, e.g.:

([2899 | 33552.87] loss=0.10 avg=0.07)

my pigeons get dandruff
treehouse actually get little pellets
little pellets of the same stuff as well, which I can stuff pigeons with
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets

Test it

I copied the checkpoint directory into the models directory

cp -r checkpoint/run1 models/libby
cp models/117M/{encoder.json,hparams.json,vocab.bpe} models/libby/

At which point I could test it using the code provided:

python3 src/interactive_conditional_samples.py --model_name libby

This worked but spewed out a lot of text, very slowly. Adding --length 20 sped it up:

python3 src/interactive_conditional_samples.py --model_name libby --length 20


That was the bulk of it done! I turned interactive_conditional_samples.py into a server and then whipped up a slackbot – it responds to direct questions and occasionally to a random message.
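The slackbot code isn’t shown here, but the “responds to direct questions and occasionally to a random message” logic is roughly this shape – a sketch with a made-up name (decide_reply) and an arbitrary 5% probability, not the actual bot:

```python
import random

def decide_reply(text, bot_name='real_libby', p=0.05, rng=random):
    """Reply to direct mentions always, other messages occasionally."""
    if bot_name in text.lower():
        return True           # someone is talking to the bot
    return rng.random() < p   # now and then, butt in anyway

assert decide_reply('hey real_libby, how are you?')

# Seeded, so the demo is repeatable:
rng = random.Random(42)
replies = sum(decide_reply('morning all', rng=rng) for _ in range(1000))
print(replies)  # close to p * 1000 = 50
```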

Putting it on a Raspberry Pi 4 was very very easy. Startlingly so.


It’s been an interesting exercise, and mostly very funny. These bots have the capacity to surprise you and come up with the occasional apt response (I’m cherrypicking)


We’ve been talking a lot at work about personal data and what we would do with our own, particularly messages with friends and the pleasure of scrolling back and finding old jokes and funny messages. My messages were mostly of the “could you get some milk?” “here’s a funny picture of the cat” type, but it covered a long period and there were also two very sad events in there. Parsing the data and coming across those again was a vivid reminder that this kind of personal data can be an emotional minefield and not something to be trivially messed with by idiots like me.

Also: while GPT-2 means there’s plausible deniability about any utterance, a bot like this can leak personal information of various kinds, such as names and regurgitated fragments of real messages. Unsurprisingly it’s not the kind of thing I’d be happy making public as is, and I’m not sure if it ever could be.



An i2c heat sensor with a Raspberry Pi camera

I had a bit of a struggle with this so thought it was worth documenting. The problem is this – the i2c bus on the Raspberry Pi is used by the official camera to initialise it. So if you want to use an i2c device at the same time as the camera, the device will stop working after a few minutes. Here’s more on this problem.

I really wanted to use this heatsensor with mynaturewatch to see if we could exclude some of the problems with false positives (trees waving in the breeze and similar). I’ve not got it working well enough yet to look at this problem in detail. But I did get it working on the i2c bus alongside the camera – here’s how.


It’s pretty straightforward. You need to

  • Create a new i2c bus on some different GPIOs
  • Tell the library you are using for the non-camera i2c peripheral to use these instead of the default one
  • Fin

1. Create a new i2c bus on some different GPIOs

This is super-easy:

sudo nano /boot/config.txt

Add the following line, preferably in the section where spi and i2c are enabled:

dtoverlay=i2c-gpio,bus=3

This line will create an additional i2c bus (bus 3) on GPIO 23 as SDA and GPIO 24 as SCL (GPIO 23 and 24 are the defaults)

2. Tell the library you are using for the non-camera i2c peripheral to use these instead of the default one

I am using this sensor, for which I need this circuitpython library (more info), installed using:

pip3 install Adafruit_CircuitPython_AMG88xx

While the pi is switched off, plug in the i2c device using GPIO 23 for SDA and GPIO 24 for SCL, and then boot it up and check it’s working:

 i2cdetect -y 3

Make two changes:

nano /home/pi/.local/lib/python3.5/site-packages/adafruit_blinka/microcontroller/bcm283x/pin.py

and change the SDA and SCL pins to the new pins

#SDA = Pin(2)
#SCL = Pin(3)
SDA = Pin(23)
SCL = Pin(24)

Then:

nano /home/pi/.local/lib/python3.5/site-packages/adafruit_blinka/microcontroller/generic_linux/i2c.py

Change line 21 or thereabouts to use the i2c bus 3 rather than the default, 1:

self._i2c_bus = smbus.SMBus(3)

3. Fin

Start up your camera code and your i2c peripheral. They should run happily together.
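On the false-positive idea: the AMG88xx library exposes the sensor as an 8×8 grid of temperatures (sensor.pixels), so a first filter could just ask whether anything in frame is warm enough to be an animal. This is a sketch, with a made-up warm_blob helper and arbitrary thresholds – not mynaturewatch code:

```python
def warm_blob(pixels, threshold=26.0, min_pixels=2):
    """True if at least min_pixels readings exceed threshold degrees C."""
    hot = [t for row in pixels for t in row if t >= threshold]
    return len(hot) >= min_pixels

# A fake 8x8 frame: ambient ~21C with a warm patch in one corner
frame = [[21.0] * 8 for _ in range(8)]
frame[6][6] = 29.5
frame[6][7] = 30.25
print(warm_blob(frame))                           # True – something warm in view
print(warm_blob([[21.0] * 8 for _ in range(8)]))  # False – just trees waving
```

With the real sensor, frame would come from sensor.pixels inside the camera loop, gating whether a detection is worth keeping.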
