Real_libby – a GPT-2 based slackbot

In the latest of my continuing attempts to automate myself, I retrained a GPT-2 model with my iMessages, and made a slackbot so people could talk to it. Since Barney (an expert on these matters) felt it was unethical that it vanished whenever I shut my laptop, it’s now living happily(?) if a little more slowly in a Raspberry Pi 4.

It was surprisingly easy to do, with a few hints from Barney. I’ve sketched out what I did below. If you make one, remember that it can leak private information – names in particular – and can also be pretty sweary, though mine’s not said anything outright offensive (yet).

fuck, mitzhelaists!

This work is inspired by the many brilliant Twitter bot-makers and machine-learning people out there, such as Barney (who has many bots, including inspire_ration and notYourBot, and knows much more about machine learning and bots than I do), Shardcore (who made Algohiggs, which is probably where I got the idea for using GPT-2), and Janelle Shane (whose ML-generated names for e.g. cats are always an inspiration).

First, get your data

The first step was to get at my iMessages. A lot of iPhone data is backed up as SQLite, so if you decrypt your backups and have a dig round, you can use something like baskup to pull the messages out. I had to make a few changes to it, but found my data in

/Users/[me]/Library/Application\ Support/MobileSync/Backup/[long number]/3d/3d0d7e5fb2ce288813306e4d4636395e047a3d28

This number – 3d0d7e5fb2ce288813306e4d4636395e047a3d28 – seems always to indicate the iMessage database, though it moves around depending on what version of iOS you have. I made a script to write the output from baskup into a flat text file for GPT-2 to slurp up (a sketch of the idea is below). I had about 5K lines.
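
If you’d rather not patch baskup, you can get most of the way with plain sqlite3, since the backup file is just an SQLite database. A minimal sketch of the flattening step – not my actual script, and the message/is_from_me names are from the iMessage schema as I understand it:

import sqlite3

# The backup file is the iMessage database (plain SQLite)
DB = '3d0d7e5fb2ce288813306e4d4636395e047a3d28'

conn = sqlite3.connect(DB)
with open('messages.txt', 'w') as out:
    # is_from_me = 1 keeps just my side of the conversations
    for (text,) in conn.execute(
            'SELECT text FROM message '
            'WHERE is_from_me = 1 AND text IS NOT NULL'):
        out.write(text + '\n')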

Retrain GPT-2

I used this code.

python3 ./download_model.py 117M

PYTHONPATH=src ./train.py --dataset /Users/[me]/gpt-2/scripts/data/

I left it overnight on my laptop, and by morning loss and avg were oscillating, so I figured it was done – 3600 epochs. The output from training was fun, e.g.:

([2899 | 33552.87] loss=0.10 avg=0.07)

my pigeons get dandruff
treehouse actually get little pellets
little pellets of the same stuff as well, which I can stuff pigeons with
*little
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets

Test it

I copied the checkpoint directory into the models directory:

cp -r checkpoint/run1 models/libby

At which point I could test it using the code provided:

python3 src/interactive_conditional_samples.py --model_name libby

This worked but spewed out a lot of text, very slowly. Adding --length 20 sped it up:

python3 src/interactive_conditional_samples.py --model_name libby --length 20


That was the bulk of it done! I turned interactive_conditional_samples.py into a server and then whipped up a slackbot – it responds to direct questions and occasionally to a random message. A sketch of the server half is below.
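
The server half is just a thin wrapper. Here’s a minimal sketch – not my actual code – assuming you’ve pulled the sampling loop out of interactive_conditional_samples.py into a generate(prompt) function (that name is mine, not the repo’s):

from flask import Flask, request, jsonify

from sampler import generate  # hypothetical module wrapping the GPT-2 sampling code

app = Flask(__name__)

@app.route('/reply', methods=['POST'])
def reply():
    # The slackbot POSTs the message text here and gets a generated reply back
    prompt = request.json.get('text', '')
    return jsonify({'reply': generate(prompt)})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)

The slackbot side then just watches for direct questions (and occasionally picks a random message), POSTs the text to /reply, and posts back whatever comes out.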

Putting it on a Raspberry Pi 4 was very, very easy. Startlingly so.


It’s been an interesting exercise, and mostly very funny. These bots have the capacity to surprise you and come up with the occasional apt response (I’m cherry-picking).


We’ve been talking a lot at work about personal data and what we would do with our own, particularly messages with friends and the pleasure of scrolling back and finding old jokes and funny messages. My messages were mostly of the “could you get some milk?” / “here’s a funny picture of the cat” type, but they covered a long period, and there were also two very sad events in there. Parsing the data and coming across those again was a vivid reminder that this kind of personal data can be an emotional minefield, and not something to be trivially messed with by idiots like me.

Also: while GPT-2 means there’s plausible deniability about any utterance, a bot like this can leak personal information of various kinds, such as names and regurgitated fragments of real messages. Unsurprisingly it’s not the kind of thing I’d be happy making public as is, and I’m not sure if it ever could be.


An i2c heat sensor with a Raspberry Pi camera

I had a bit of a struggle with this so thought it was worth documenting. The problem is this: the i2c bus on the Raspberry Pi is used by the official camera for initialisation, so if you want to use an i2c device at the same time as the camera, the device will stop working after a few minutes. Here’s more on this problem.

I really wanted to use this heat sensor with mynaturewatch to see if we could exclude some of the problems with false positives (trees waving in the breeze and similar). I’ve not got it working well enough yet to look at that in detail, but I did get it working on an i2c bus alongside the camera – here’s how.


It’s pretty straightforward. You need to

  • Create a new i2c bus on some different GPIOs
  • Tell the library you are using for the non-camera i2c peripheral to use these instead of the default one
  • Fin

1. Create a new i2c bus on some different GPIOs

This is super-easy:

sudo nano /boot/config.txt

Add the following line, preferably in the section where SPI and i2c are enabled.

dtoverlay=i2c-gpio,bus=3,i2c_gpio_delay_us=1

This line creates an additional i2c bus (bus 3) with GPIO 23 as SDA and GPIO 24 as SCL (23 and 24 are the overlay’s defaults).

2. Tell the library you are using for the non-camera i2c peripheral to use these instead of the default one

I am using this sensor, for which I need this circuitpython library (more info), installed using:

pip3 install Adafruit_CircuitPython_AMG88xx

While the Pi is switched off, plug in the i2c device using GPIO 23 for SDA and GPIO 24 for SCL, then boot it up and check it’s working:

i2cdetect -y 3

Make two changes. First, edit the pin definitions:

nano /home/pi/.local/lib/python3.5/site-packages/adafruit_blinka/microcontroller/bcm283x/pin.py

and change the SDA and SCL pins to the new pins:

#SDA = Pin(2)
#SCL = Pin(3)
SDA = Pin(23)
SCL = Pin(24)

Then edit the bus number:

nano /home/pi/.local/lib/python3.5/site-packages/adafruit_blinka/microcontroller/generic_linux/i2c.py

Change line 21 or thereabouts to use i2c bus 3 rather than the default, 1:

self._i2c_bus = smbus.SMBus(3)

3. Fin

Start up your camera code and your i2c peripheral. They should run happily together – a quick test sketch is below.
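
For reference, here’s a minimal test along those lines – assuming the AMG88xx sensor and the blinka edits above, with a camera preview standing in for real camera code:

import time
import board
import busio
import picamera
import adafruit_amg88xx

# board.SCL / board.SDA now resolve to GPIO 24 / GPIO 23 after the pin.py edit
i2c = busio.I2C(board.SCL, board.SDA)
amg = adafruit_amg88xx.AMG88XX(i2c)

with picamera.PiCamera() as camera:
    camera.start_preview()  # stands in for whatever camera code you're running
    while True:
        print(amg.pixels)   # an 8x8 grid of temperature readings
        time.sleep(1)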


Balena’s wifi-connect – easy wifi for Raspberry Pis

When you move a Raspberry Pi between wifi networks and you want it to behave like an appliance, one way to let a user (rather than a developer) set the wifi network easily is to have the Pi create an access point itself: you connect to it with a phone or laptop, enter the wifi details in a browser, and the Pi then reconnects to the proper network. Balena have a video explaining the idea.

Andrew Nicolaou has written things to do this periodically as part of Radiodan. His most recent suggestion was to try Resin (now Balena)’s wifi-connect. Since Andrew last tried, there’s a bash script from Balena to install it as well as a Docker file, so it’s super easy with just a few tiny pieces missing. This is what I did to get it working:

Provision an SD card with Raspbian Stretch, e.g. using Etcher or manually.

Enable ssh e.g. by

touch /Volumes/boot/ssh

Share your network with the pi via ethernet, ssh in and enable wifi by setting your country:

sudo raspi-config

then Localisation Options -> Set wifi country.

Install wifi-connect

bash <(curl -L https://github.com/balena-io/wifi-connect/raw/master/scripts/raspbian-install.sh)

Add a slightly-edited version of their bash script

curl https://gist.githubusercontent.com/libbymiller/e8fe6821e122e0a0ac921c8e557320a9/raw/46138fb4d28b494728e66515e46bd7d736b19132/start.sh > /home/pi/start-wifi-connect.sh

Add a systemd script to start it on boot.

sudo nano /lib/systemd/system/wifi-connect-start.service

-> contents:

[Unit]
Description=Balena wifi connect service
After=NetworkManager.service

[Service]
Type=idle
ExecStart=/home/pi/start-wifi-connect.sh
Restart=on-failure
StandardOutput=syslog
SyslogIdentifier=wifi-connect
User=root

[Install]
WantedBy=multi-user.target

Enable the systemd service

sudo systemctl enable wifi-connect-start.service

Reboot the pi

sudo reboot

A wifi network called “Wifi Connect” should come up. Connect to it, enter your details into the captive portal, and wait. The portal will go away, and then you should be able to ping your pi over the wifi:

ping raspberrypi.local

(You might need to disconnect the ethernet from the Pi before connecting to the Wifi Connect network if you were sharing your network that way.)

Cat detector with Tensorflow on a Raspberry Pi 3B+

Like this

Edit: code is now here.

Download Raspbian Stretch with Desktop

Burn a card with Etcher.

(Assuming a Mac) Enable ssh

touch /Volumes/boot/ssh

Put a wifi password in

nano /Volumes/boot/wpa_supplicant.conf

country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
  ssid="foo"
  psk="bar"
}

Connect the Pi camera, attach a dial to GPIO pin 12 and ground, boot up the Pi, ssh in, then

sudo apt-get update
sudo apt-get upgrade
sudo raspi-config # and enable camera; reboot

install tensorflow

sudo apt install python3-dev python3-pip
sudo apt install libatlas-base-dev
pip3 install --user --upgrade tensorflow

Test it

python3 -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"

get imagenet

git clone https://github.com/tensorflow/models.git
cd ~/models/tutorials/image/imagenet
python3 classify_image.py

install OpenCV

pip3 install opencv-python
sudo apt-get install libjasper-dev
sudo apt-get install libqtgui4
sudo apt install libqt4-test
python3 -c 'import cv2; print(cv2.__version__)'

install the pieces for talking to the camera

cd ~/models/tutorials/image/imagenet
pip3 install imutils picamera
mkdir results

download an edited version of classify_image

curl -O https://gist.githubusercontent.com/libbymiller/d542d596566774a35752d134f80b1332/raw/471f066e4dc498501bab7731a07fa0c1926c1575/classify_image_dial.py

Run it, and point at a cat

python3 classify_image_dial.py
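
The gist has the real thing (including the dial); the basic loop is roughly this sketch, which just shells out to the tutorial’s classify_image.py from the same directory:

import subprocess
import time
import picamera

with picamera.PiCamera() as camera:
    while True:
        camera.capture('/home/pi/frame.jpg')
        # classify_image.py prints the top-5 imagenet labels for an image
        out = subprocess.check_output(
            ['python3', 'classify_image.py',
             '--image_file', '/home/pi/frame.jpg']).decode()
        if 'cat' in out.lower():
            print('cat detected!')
        time.sleep(5)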

Etching on a laser cutter

I’ve been struggling with this for ages, but yesterday at Hackspace – thanks to Barney (and, I now realise, Tiff said this too, but I got distracted and never followed it up) – I got it to work.

The issue was this: I’d been assuming that everything you lasercut had to be a vector DXF, so I was tracing bitmaps in Inkscape to make a suitable SVG, converting that to DXF, loading it into the lasercut software at Hackspace, downloading it and – boom – “the polyline must be closed” for etching: no workie. No matter what I did to the export in Inkscape or how I edited it, it just didn’t work.

The solution is simply to use a black and white png, with a non-transparent background. This loads directly into lasercut (which comes with JustAddSharks lasers) and…just…works.

As a bonus and for my own reference – I got good results with 300 speed / 30 power (below 30 didn’t seem to work) for etching (3mm acrylic).